Tackling Cocktail Parties: The AI Hearing Aids that Read Your Mind
The human brain is unparalleled in its ability to pick out the voices and attention-grabbing phrases that pique our interest from the din of everyday life. Hearing aids, on the other hand, are not so hot at this so-called “cocktail party effect”: rather than singling out the center of attention, they amplify all auditory stimuli equally.
However, this past week researchers unveiled a potential solution to this equal-amplification problem: an AI (artificial intelligence) hearing aid that can, in effect, read the wearer’s mind. As described in Science Advances, the device uses artificial intelligence to separate the sounds of different speakers and sources while monitoring the listener’s brain activity to determine where attention lies. From there, the hearing aid amplifies the single voice the brain is focusing on.
This forward-thinking technology has enormous potential to disrupt the hearing industry, but there are still significant hurdles to surmount before consumers can benefit from the research. For the AI-powered hearing aid to be widely accessible, it cannot require electrodes implanted on the surface of the brain, as it currently does.
This innovative project, led by electrical engineer Nima Mesgarani of Columbia University’s Zuckerman Mind Brain Behavior Institute, is one of many attempts to get hearing aids to mimic the hearing of a healthy individual. For instance, the $500 Bose Hearphones, controlled through an app, use a set of microphones that can be directionally adjusted from one person to another to focus on one specific auditory source while drowning out ambient noise from the environment. But no current device, app, or program can replicate how a normal brain hears: selectively amplifying chosen conversations among multiple sources in a crowded environment.
“Even the most advanced digital hearing aids don’t know which voices they should suppress and which they should amplify,” Mesgarani states.
If hearing aids could automatically choose the correct focal point, it would make a major difference in the lives of the hard of hearing, says Richard Miller, director of the Neural Prosthetics Program at the National Institute on Deafness and Other Communication Disorders, which funded the study. “There is real gold to be mined in that hill,” Miller says.
Building on work with his graduate adviser in 2012, Mesgarani began looking for clues in the way the brain processes sound. He found that when people engage in conversation, the listener’s brain waves echo the acoustic features of the speaker’s voice, keying in on that voice and filtering out extraneous sounds.
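That echoing effect suggests a way to decode attention directly from neural recordings: reconstruct the loudness envelope the brain appears to be tracking and compare it against each candidate voice. The sketch below illustrates that idea under stated assumptions; the function name and the pre-trained linear decoder weights are hypothetical stand-ins for illustration, not the Columbia team’s actual model.

```python
import numpy as np

def attended_speaker(neural_data, decoder, speaker_envelopes):
    """Guess which separated voice the listener is attending to.

    neural_data: (channels, time) array of recorded brain activity.
    decoder: (channels,) weights of a hypothetical pre-trained linear
        model that reconstructs the attended speech envelope.
    speaker_envelopes: list of (time,) loudness envelopes, one per voice.
    """
    # Reconstruct the envelope the brain appears to be "echoing".
    reconstructed = decoder @ neural_data  # -> (time,)

    # Correlate the reconstruction with each candidate envelope;
    # the attended speaker's voice should match best.
    scores = [np.corrcoef(reconstructed, env)[0, 1]
              for env in speaker_envelopes]
    return int(np.argmax(scores))
```

In practice such a decoder would be fit to each listener, but the core move is the same: whichever voice correlates best with the neural reconstruction is taken to be the one being attended to.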
An individual with healthy hearing is able to focus on the sound of someone speaking through the brain’s secondary auditory cortices. Found on both sides of the brain behind the ears, these regions amplify one voice over others simply through attention. The sound of a friend or loved one, a name, or an emotionally charged word (“secret,” “confidential”) causes a spike in activity in the auditory cortices, resulting in a perceived increase in volume.
The brain-controlled hearing aid first separates the distinct audio signals coming from different sources (people, music, television) and determines the voiceprint, or frequency signature, of each. It then detects brain waves in the listener’s auditory cortex that indicate where the listener’s attention lies. Finally, the system matches the brain’s activity to the attended source and amplifies that signal. The truly remarkable aspect of the technology is that this process repeats every time a new voice is introduced or the listener’s attention shifts.
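Strung together, one pass of that separate-decode-amplify loop might look like the sketch below. This is an illustrative outline only: `separate_sources` and `record_brain_waves` are hypothetical callables standing in for the separation network and the neural interface described above, and the envelope correlation repeats the earlier sketch.

```python
import numpy as np

def hearing_aid_step(mixture, decoder, separate_sources,
                     record_brain_waves, boost=4.0):
    """One pass of the brain-controlled amplification loop (a sketch)."""
    # 1. Separate the mixture into individual voices/sources.
    sources = separate_sources(mixture)        # list of (time,) arrays

    # 2. Read the listener's auditory-cortex activity.
    neural = record_brain_waves()              # (channels, time) array

    # 3. Decode attention: reconstruct the attended envelope and see
    #    which source's crude loudness envelope matches it best.
    reconstructed = decoder @ neural
    scores = [np.corrcoef(reconstructed, np.abs(s))[0, 1] for s in sources]
    attended = int(np.argmax(scores))

    # 4. Remix the audio, boosting the attended voice over the rest.
    return sum(s * (boost if i == attended else 1.0)
               for i, s in enumerate(sources))
```

Running this step on each new chunk of audio is what would let such a device keep up as attention jumps from one voice to another.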
This research adds to a growing list of studies that tap into the brain’s activity to compensate for functions the body can no longer manage on its own. For widespread adoption, the mind-reading hearing aid would have to work with electrodes placed on the scalp; fortunately, the Columbia team is developing both a scalp version and one with electrodes placed around the ears.

Earlier iterations of the artificial intelligence-powered hearing aid worked only on familiar voices the system had been trained to recognize: it could parse those voices but couldn’t differentiate between unknown ones. The next-generation device, by contrast, “can recognize and decode a voice – any voice – right off the bat,” Mesgarani said.