Self Affirmation Mirror
The goal of this project is to encourage users to boost their self-esteem in a fun, interactive, and public way that brings talking to yourself in the mirror out of the shadows. We, as designers, want to encourage people to uplift themselves unapologetically.

The Self Affirmation Mirror is an interactive piece for users to hype themselves up. The user's spoken self-affirmations appear as text around their reflection. Stars lining the bottom of the screen turn yellow with each compliment the user gives themselves. After five compliments, the mirror displays the final message: "YOU ARE A STAR".

Process
Our initial goal was to create some sort of assistive technology. We explored different problem spaces, including nursing homes, physical therapy, and mental health. After doing preliminary research in these areas, we focused on mental health. The idea began as a hackathon project about self-affirmation; the initial prototype ran entirely on a computer.


Building on the first iteration from the hackathon, we integrated the software into a physical mirror.
We considered the environment and setting the mirror should be designed for. Our two options were an in-home, private daily experience and a public installation. We chose the public installation because it is more accessible and introduces self-positivity into the public sphere.
Another part of the user experience we focused on was the quantity and placement of the "I am" statements. Suggestions included filling the whole screen, filling the whole screen except the user's face, or setting a goal number of statements. We decided on a fixed number to aim for, which gives the user a distinct endpoint. We changed the vertical bar to five horizontal stars along the bottom of the screen; each star fills in when a new statement is said. The stars tie the final message to the whole experience, creating a clear goal for the user.
Code
This project is coded in Python; the visuals are drawn with the Pygame library, which renders images and text. We also use Google's speech-to-text API. Our code runs two threads: one for the API and one for drawing. The API thread processes live audio input, which we match against a set of predetermined "I am" statements. The drawing thread iterates through the dictionary of spoken statements and prints each message in a corresponding color at a random location. Resetting the interaction is a new feature that facilitates using the mirror as an exhibit, since many users can get the full experience without any setup.
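The matching step above can be sketched as follows. This is a minimal, hypothetical illustration: the statement set, the `GOAL` of five, and the function name `match_affirmations` are placeholders, not the project's actual code.

```python
# Hypothetical set of preset "I am" statements; the real project's list
# and colors differ. GOAL mirrors the five stars on screen.
AFFIRMATIONS = {
    "i am strong",
    "i am smart",
    "i am kind",
    "i am brave",
    "i am enough",
}
GOAL = 5  # one star per matched statement

def match_affirmations(transcript, said):
    """Record any preset affirmation found in a live transcript.

    `said` maps each matched statement to its display color (stubbed
    here); returns True once the goal is reached.
    """
    text = transcript.lower()
    for phrase in AFFIRMATIONS:
        if phrase in text and phrase not in said:
            said[phrase] = "color placeholder"  # real code picks a color
    return len(said) >= GOAL
```

The drawing thread would then render each entry of `said` at a random location, filling one star per entry.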
We implemented mutual exclusion, locking the dictionary whenever either thread accesses it so that it cannot change mid-use.
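A minimal sketch of that mutual exclusion, assuming Python's `threading.Lock`: the writer (standing in for the speech-to-text callback) and the reader (standing in for the Pygame render loop) both take the lock before touching the shared dictionary. The function names and timing here are illustrative, not the project's actual code.

```python
import threading
import time

statements = {}                 # shared: phrase -> display color
statements_lock = threading.Lock()

def speech_loop(transcripts):
    """Stand-in for the speech-to-text thread adding matched phrases."""
    for phrase in transcripts:
        with statements_lock:   # writer holds the lock only briefly
            statements[phrase] = "yellow"
        time.sleep(0.01)

def draw_loop(frames, sizes_seen):
    """Stand-in for the drawing thread reading the dictionary."""
    for _ in range(frames):
        with statements_lock:   # take a consistent snapshot under the lock
            snapshot = dict(statements)
        sizes_seen.append(len(snapshot))  # real code draws each message
        time.sleep(0.01)
```

Copying a snapshot under the lock keeps the critical section short, so the render loop never draws from a dictionary that the speech thread is mid-update on.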

Frame
We built the frame from ½" x 2" planed wood boards. The frame has four front pieces and four pieces making up the rim. All the pieces were cut diagonally on the miter saw so that they fit together cleanly at the corners. We glued and nailed the rim together, then glued the front pieces to the rim. We didn't nail the front pieces to the rim because we didn't want any nails visible on the front. The table we clamped the frame to while the glue dried wasn't perfectly flat; this, along with imperfect miter cuts, contributed to small gaps in the corners of the frame. The corners are reinforced by wood triangles that also hold the TV. We drilled a centered hole in the bottom of the frame for the power cable and a centered hole in the bottom front for the microphone. Inside the frame, in front of the TV, is a sheet of 30% mirrored acrylic that we laser cut to size.
In collaboration with Madison Szittai, Rhaime Kim, Benjamin Phelps, and Issra Said
My primary role was coding the functionality in Python and connecting the Raspberry Pi; as a secondary role, I was also involved in all of the design and assembly.