After a couple of rounds of writing and reflecting, and then talking to my classmates on Tuesday, I decided to pursue my beak mask direction for the final. I've been thinking a lot about potential interactions for the beak mask and looking for ways to incorporate the previous directions as well. Finally I came up with this idea, combining the beak with the very first prototype I made (the one converting language to icons), to create an experience of a new way of communicating. A beak mask is placed on a stand in front of a "moon", a luminous circle on the wall. The journey begins when the participant takes the mask and puts it on. Suddenly you hear a robotic voice saying something you don't understand. Following each utterance, a change in the space is triggered: you see the moon change color, the light strobe, hear a bell ring. What the voice says doesn't make much sense to you, yet it carries some familiarity with a known language. After a while you start to get an idea of how each utterance triggers a response. You begin to follow along, mimicking the voice and watching the responses for yourself. Each time you speak, the robotic voice follows up with something that sounds like a reply to you.
The experience is meant to be surreal and otherworldly, like stepping into a new world or an unknown territory: everything seems different from our lived experience, and communicating with the locals can be confusing. There might be misunderstanding, confusion, and conflict at first. But as we acknowledge the difference and try to listen and learn how things work there, we can eventually build a connection within the new environment, like mastering a language, or understanding how the people there perceive their world. At the end of the video, the human voice and the robotic voice start to sync as a symbol of mutual understanding. The name of the work, Talk To Me, is both a prompt to encourage the audience to start talking and interacting and an expression of a longing for communication and understanding.
The video above is a simulated scenario, since I haven't successfully built the technical model yet. The following is an interactive program I made based on a speech recognition and synthesis algorithm. I haven't been able to make the synthesized voice come in at the right timing yet, and the recognition isn't accurate enough. There are still a lot of technical problems to solve.
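The core of the interaction, mapping a (possibly misheard) utterance to a change in the space, can be sketched roughly like this. The invented vocabulary and the trigger names below are placeholders, not the actual content of the piece, and the fuzzy matching is just one possible way to soften inaccurate recognition:

```python
import difflib

# Hypothetical utterance-to-trigger mapping; the real piece would use
# its own invented vocabulary and wire each trigger to light/sound cues.
TRIGGERS = {
    "luma sera": "moon changes color",
    "tovi nala": "light strobes",
    "keshi mora": "bell rings",
}

def respond(heard: str, cutoff: float = 0.6):
    """Return the scene change for a recognized utterance, or None.

    Fuzzy matching absorbs small recognition errors: the closest known
    utterance above `cutoff` similarity wins; anything too far off is
    treated as not understood.
    """
    matches = difflib.get_close_matches(heard.lower(), list(TRIGGERS),
                                        n=1, cutoff=cutoff)
    return TRIGGERS[matches[0]] if matches else None
```

With a scheme like this, a slightly garbled recognition result such as "looma sera" would still fire the moon trigger, while unrelated speech falls through silently.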
Click the image to try the interactive model: