SpeechBubbles: Spatialize Audio with AR (MIT Hackathon 2017)

During the second annual Reality Virtually Hackathon at MIT’s Media Lab, the Drexel VR club decided to split up and join new teams to expand our network. I put my effort toward an idea that I thought would be particularly well suited to a hackathon. I had been struggling to fully work through a collection of thoughts, and I wondered:

“What if there was a way to get thoughts into the physical world and organize them like you do tangible objects?”

I pitched the idea, inspired a team, and brought the vision to life. What we came away with is a unique interaction that uses AR to tie audio notes to the physical world.

Prior to the hackathon, I had been tinkering with Apple’s ARKit, and I figured it would be the perfect platform to build this app on. We wanted the app to be accessible whenever a thought sparked, to record thoughts quickly, and to create an intuitive relationship between the device and the physical world. I came up with the metaphor of using your phone as a bubble wand to ‘blow’ your ideas into 3D space.

Creating an app with interactions that felt natural was our highest priority. Sketching, combined with my 3D animation and Unity skills, helped me create rapid prototypes. This let our team learn which subtle cues reinforced the interaction feedback loops in our app: for example, the way a bubble expands the longer you record, and the waveform that animates while audio records or plays back. These cues helped our users feel confident in their interactions and stay engaged with their thought bubbles.
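The “bubble expands as you record” cue boils down to a small mapping from recording duration to bubble size. A minimal sketch of that idea in Swift follows; the function name, constants, and the linear-growth-with-cap behavior are all illustrative assumptions, not the values we actually shipped:

```swift
/// Illustrative sketch of the "bubble grows as you record" cue.
/// All names and constants here are assumptions for demonstration.
func bubbleRadius(afterRecording seconds: Float,
                  base: Float = 0.05,            // starting radius (meters)
                  growthPerSecond: Float = 0.01, // growth rate while recording
                  maxRadius: Float = 0.25) -> Float {
    // Grow linearly with recording time, capped so a long note
    // doesn't fill the user's entire view.
    return min(base + growthPerSecond * seconds, maxRadius)
}
```

Capping the radius matters in practice: without it, a rambling two-minute note would produce a bubble large enough to occlude the rest of the scene.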


Rapid prototyping also answered our biggest question: how do users trigger audio playback? We discovered that it is incredibly satisfying to have the device collide with the bubble, triggering the audio clip. It almost feels as if you are traveling back to the original moment of your thought, because the physical space you occupy is the same as when the thought first occurred.
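At its core, the collision trigger is a proximity check between the device and each bubble in world space (in ARKit, the device position comes from the current frame’s camera transform). The sketch below shows that check as plain Swift; the `ThoughtBubble` type and function names are hypothetical, not the app’s actual code:

```swift
import simd

/// Hypothetical model of a placed audio bubble; in the real app the
/// position would come from an AR anchor in world space.
struct ThoughtBubble {
    var center: SIMD3<Float>  // world-space position of the bubble
    var radius: Float         // bubble size (grows with recording length)
}

/// Returns true when the device has "popped" the bubble, i.e. the
/// camera position has moved inside the bubble's radius. Call this
/// each frame and start playback on the false -> true transition.
func deviceTouches(_ bubble: ThoughtBubble,
                   cameraPosition: SIMD3<Float>) -> Bool {
    return simd_distance(bubble.center, cameraPosition) <= bubble.radius
}
```

Checking distance each frame and acting only on the enter transition keeps a bubble from retriggering every frame while the device sits inside it.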

Created in 36 hours at MIT Media Lab’s 2017 Reality Virtually Hackathon.

Team: Raymond Huang, Samaneh Kazemi Nafchi, Stephanie D. McNiel, Xuefei Yang

Tools I used: ARKit, Unity, Maya
