Starkey Laboratories, Inc., one of the world's leading hearing technology companies, is proud to announce that research conducted by JuanJuan Xiang, a new member of the Starkey Research Department, in conjunction with the University of Maryland and Johns Hopkins University was presented at the Association for Research in Otolaryngology (ARO) meeting in Baltimore last week. The ARO is an international association of scientists and physicians dedicated to scientific exploration in otolaryngology. The poster, titled "Competing Streams at the Cocktail Party - A Neural and Behavioral Study of Auditory Attention," examines how the brain tunes out other talkers in a loud room in order to concentrate on one conversation.
"It is very difficult for a person with hearing loss to understand another person talking in a noisy room where many other people are talking," said Brent Edwards, Vice President of Research for Starkey. "One of the most pressing questions among hearing researchers in academia and the hearing aid industry alike is why people with hearing loss suffer so much more than those who hear normally in this situation. This is important research and we are proud to have JuanJuan on our team."
Starkey began its effort to answer this question several years ago when it opened the Starkey Hearing Research Center and conducted experiments on listening ability in complex environments. Since that time, the Center has collaborated with researchers at Boston University, UC Berkeley and UC Davis to help answer this question.
Using magnetoencephalography (MEG), an imaging technique that measures the magnetic fields produced by changes in the brain's electrical activity, the researchers asked 26 volunteers to listen to one of two competing audio streams while ignoring the other. The audio streams consisted of a fast-paced series of beeps and a slower pattern of beeps, each with occasional rhythm changes. The results showed that people focusing on one of the patterns did not detect changes in the other. In addition, each listener's brain showed neural activity for both audio streams but was much more in sync with the pattern on which the listener was concentrating.
"This research offers a clue into how the brain is able to select which sound source to pay attention to when there are multiple sources," said Edwards. "Additionally, it provides a way to use MEG as a diagnostic tool to evaluate someone's ability to focus attention on single sound sources, which could be a mechanism for measuring and understanding the effect of both hearing loss and hearing aids on complex listening situations. This is a step forward toward understanding why those with hearing loss suffer so significantly in restaurants and other noisy conversational situations."
This research was funded by the National Institute on Deafness and Other Communication Disorders.