For many of us, years of clinical practice have armed us with a lot of what we consider to be conventional wisdom about many aspects of hearing aids. For at least some of us, new truths take a bit of time to digest and absorb. The purpose of this article is to consider some of the recent research findings and how they may alter our biases and clinical practices.
1. Hearing aids work. While this may not be a new truth to many, a number of years ago the attitude among many medical professionals was that hearing aids don't work with "nerve" loss. Many older consumers still need convincing that hearing aids do work, especially for "nerve" loss, where surgical and/or other medical intervention is not an option. In fact, all of our recent efforts regarding multiple channels, multiple bands, and multiple kinds of compression in a single device could be construed as overkill.
In 1995, a group of Dutch investigators published fairly provocative results concerning the gain and shaping of the frequency response for optimal speech perception and sound quality judgments (Van Buuren, Festen & Plomp, 1995). Of the 25 different spectra studied, the majority were not judged to be significantly different (in either speech intelligibility or sound quality) by their subjects. The responses judged less acceptable were those with minimal gain, those with output approaching the discomfort zone, and those with low frequency gain but little or no high frequency gain. A more recent series of studies by Karolina Smeds (Sweden's Royal Institute of Technology) looked at speech intelligibility, loudness and preference across four prescriptive formulas (NAL-NL1, DSL[i/o], Camfit, and AUD). She reports that "only small differences in speech recognition were seen ... in spite of large differences in gain-frequency response" (p. 21).
In a recent study done in my lab (Bentler & Duve, 2000), we found little difference in speech perception ability (as measured with the HINT and CUNY sentences) across hearing aids utilizing linear processing, fast-acting compression (ReSound), slow-acting compression (Senso), adaptive compression (K-Amp), or combined fast- and slow-acting compression (DigiFocus). What did matter was the amount of audibility provided, and it must be acknowledged that dynamic range compression processors are able to provide that audibility over a wider range of inputs than any other type of processor. Furthermore, whether that gain is processed in an analog or digital manner does not seem to matter (for now, let's ignore the other advantages of digital processing). Gain is gain.
For those of us who are target-crazy, high-tech (multi-everything) fans, we should take some time to consider that the old Pascoe axiom still holds: Audibility is still the best determinant of intelligibility (my words, obviously). Once that optimal audibility is achieved for a given individual, it remains debatable whether further fine-tuning of the response or the compression parameters will enhance communication for that listener.
2. Matching high frequency targets may not be desirable. Successfully fitting individuals with precipitous loss has been the bane of our existence throughout our professional lives! Even if we are able to control the resultant feedback -- a distinct possibility with some of the current digital processing schemes -- the complaints of unnatural/tinny/harsh sound quality can be equally obtrusive. Recent evidence from a couple of different groups of investigators suggests we should no longer focus on providing gain to persons whose high frequency thresholds exceed about 60 dB HL, but rather focus on the mid or lower frequency range, where gain is more readily achievable and usable.
Cindy Hogan and Chris Turner (1998) systematically studied the improvement gained in both audibility (as measured with the Articulation Index) and speech perception ability as they increased the high frequency cutoff of filtered speech. They discovered that individuals with severe loss in the high frequency range did not benefit from increased audibility and, in fact, occasionally showed a decrease in speech perception scores. Similar results have been proffered by scientists at the National Acoustic Laboratories (Ching, Dillon, & Byrne, 1998). Obviously, further research is necessary to determine how these findings apply to the prelingually hearing-impaired child. In the meantime, for the typical hearing aid wearer, we might focus on other aspects of the fitting and stop chasing the high frequencies!
3. Two-microphone directional designs don't (necessarily) provide better directional hearing than single microphones (with two ports). Over the past few years I have been approached a number of times by long-time practicing audiologists and hearing aid specialists who have voiced a sigh of relief that, finally, we have good directional hearing aids thanks to the two-mic designs. Indeed, we do have good directional hearing aids. Some use a single mic with two ports and an acoustic delay; others use two single-port mics and an electrical delay.
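Both designs rest on the same delay-and-subtract principle: the rear-facing input is delayed (acoustically through a port, or electrically) and subtracted from the front input, which cancels sound arriving from behind. A minimal sketch of the idea follows; the spacing, frequency, and delay values are my own illustrative assumptions, not taken from any particular product.

```python
import numpy as np

# Delay-and-subtract directional sketch (illustrative values, not from the article).
c = 343.0      # speed of sound, m/s
d = 0.012      # front-to-rear port/mic spacing, ~12 mm (assumption)
tau = d / c    # internal delay equal to the external travel time -> cardioid
f = 1000.0     # test frequency, Hz (assumption)
w = 2 * np.pi * f

def response(theta_deg):
    """Magnitude response for a plane wave arriving from theta (0 deg = front)."""
    theta = np.radians(theta_deg)
    # Front input minus internally delayed rear input, for a unit plane wave:
    # rear lags the front by d*cos(theta)/c, and is further delayed by tau.
    return abs(1 - np.exp(-1j * w * (tau + d * np.cos(theta) / c)))

print(response(0))    # strongest pickup from the front
print(response(180))  # null directly behind: external and internal delays cancel
```

Setting the internal delay equal to the external travel time puts the null at 180 degrees (a cardioid); other delay ratios steer the null elsewhere, which is why noise placement matters so much in demonstrations.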
Todd Ricketts (at Vanderbilt) has published a number of articles over the past year or so with strong evidence that a single-mic design works as effectively as a two-mic design, that digital implementation of the directional mic may not be a factor in effectiveness (more later), and that the speech-perception-in-noise benefit measured with directional hearing aids depends primarily upon the placement of the background noise. If I merely wanted to convince a patient that directional mics really can work, I would situate him/her in the booth so that the noise source fell in the null of the polar pattern, and then have him/her switch back and forth between omni and directional modes. I will admit that this is a contrived arrangement, but it is most effective in showing the potential benefit to the wary patient.
We have been involved in a number of studies of directional hearing aids in my lab recently. The number and location of the loudspeakers presenting the background interference, the reverberation characteristics of the room, and the use of one versus two hearing aids all have more impact on the speech perception results than does the number of microphones making up the directional pattern.
4. Digital processing of sound has more potential for error (distortion) than analog processing of sound. This one is hard for many to accept. When digital hearing aids hit the market in the mid-90s, many of us believed we had taken the step from phonograph records to CDs in hearing aids. The engineers were quick to point out, however, that the advantages of those first-generation DSP hearing aids accrued largely to the industry. Because digital hearing aids still use, for the most part, the same mics and receivers as their analog counterparts, the sound quality could not be too different.
As Chris Schweitzer points out in his tutorial on the development of digital hearing aids (1997), "...the faster the sampling rate and the greater the number of bits for data representation, the closer the digital signal will be to the original analog signal" (p. 46).
Still, capable engineers have figured out how to deal with the size and power consumption issues, as well as quantization "noise" and aliasing and imaging errors. Digital chips, in the long run, are cheaper to produce, easier to replace, and have the potential for doing more powerful signal processing than analog amplifiers. It is that potential we should be celebrating.
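Schweitzer's point about bit depth can be illustrated with a toy calculation: digitizing the same signal with more bits shrinks the quantization error. The sampling rate, test tone, and bit depths below are arbitrary illustrative choices, not values from any hearing aid.

```python
import numpy as np

# Toy illustration of quantization "noise" (all parameter values are assumptions).
fs = 16000                              # sampling rate, Hz
t = np.arange(fs) / fs                  # one second of samples
analog = np.sin(2 * np.pi * 440 * t)    # a 440-Hz "analog" input in [-1, 1]

def quantize(x, bits):
    """Round x to the nearest of 2**bits uniformly spaced levels."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

for bits in (8, 12, 16):
    err = np.sqrt(np.mean((analog - quantize(analog, bits)) ** 2))
    print(f"{bits}-bit RMS quantization error: {err:.2e}")
```

Each added bit roughly halves the quantization step, so the residual error falls quickly; the trade-off, as noted above, is chip size and power consumption.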
5. Current noise reduction schemes may not improve speech perception in noise any better than previous attempts. In the past several years, the use of noise reduction algorithms has (again) become widespread. As many of you will recall, in the late '80s and early '90s, noise reduction schemes such as adaptive filtering, adaptive compression, low frequency compression, and so on were touted as having the capability of improving speech perception in noise. In 1994, the FDA issued a "cease and desist" order, mandating that manufacturers stop "false" advertising and use only claims that could be substantiated by scientific evidence. Since digital hearing aids hit the U.S. market, implementation of newer noise reduction schemes has been possible. For many of those schemes, however, the outcome has been easier listening rather than better speech perception in noise. At a recent scientific meeting, Wouter Dreschler from The Netherlands presented provocative evidence that while all the noise reduction algorithms tested reacted to some degree to noise (modulated and unmodulated ICRA noises), in some implementations continuous noise produced less noise reduction than modulated (speech-like) noise! He agreed that this result was unexpected and counterintuitive. With respect to speech perception, almost no benefit was found when the noise reduction algorithms were activated. Still, the potential for effective noise reduction algorithms in digital circuitry holds promise. Stay tuned.
6. Satisfaction is inversely related to the cost of the hearing aid. Sergei Kochkin has been providing MarkeTrak data on consumer hearing aid use and satisfaction for our consumption for the past ten years. In his most recent analysis, across thousands of hearing-impaired individuals, he reported that the average digitally programmable (analog) hearing aid achieves a customer satisfaction rating 16% higher than the nonprogrammable product (Kochkin, 1999). He concludes, "In short, penetration of the hearing aid market by advanced technology is key to improving customer satisfaction" (p. 46). Because higher technology hearing aids do carry a higher price tag, there is the implication that higher prices will result in higher levels of satisfaction. Recent data from Robyn Cox (University of Memphis) dispute the higher-satisfaction claim. Her yet-to-be-published data indicate a negative relationship between what subjects paid for their hearing aids (after third party and family contributions) and their level of satisfaction with them.
7. Presentation does impact outcome measures. Of great concern to me over the past few years have been the stated and published outcomes of many clinical trial studies of newer, higher technology hearing aids. In virtually every reported data set, there was no measurable advantage in speech recognition scores that could not be attributed to the nonlinear amplifier or clean output limiting being utilized. Yet a typical conclusion of many of these investigations has been that while no objective proof of superiority could be shown, the subjects subjectively preferred the digital hearing aids; this discrepancy, the studies would conclude, was due to the insensitivity of our measures. Missing from most of those studies was blinding (preferably double-blinding). Without blinding the subject to the intent of the study, such conclusions cannot be accepted.
Perhaps the most provocative study done in my lab in the past few years is the one referred to as the "Hype Study" (Bentler, Niebuhr, & Johnson, 2000). In that investigation, we were interested in quantifying the change in benefit (objective as well as self-reported) that accompanies the patient's belief that he or she has the newest, highest technology hearing aid. Since recent marketing by many of the major companies has been pervasive, with testimonials freely used as proof of technology superiority, we attempted to measure how much of the benefit was real and how much was perceived (which is still real, in a sense). Studies of the placebo effect and the impact of placebo treatments are well known. It has often been reported that, under conditions of heightened expectations, a variety of interventions known to be bogus can nonetheless be quite effective.
A favorite study illustrating that point was conducted by a duo of Japanese scientists (described in Benson, 1996). Intrigued by the fact that a number of patients exhibited an allergic reaction (similar to poison ivy rash) simply by being in proximity to a culprit vine, the scientists tested 57 young men to assess their reaction to exposure. The catch was that the subjects were given false information about which arm had been brushed with the poisonous leaves and which with harmless ones, and their reactions followed their beliefs rather than the actual exposure. "Within minutes, the arm the boys believed to have been brushed with the poisonous tree began to react, showing red and developing bumps, causing itching and burning sensations, while in most cases the arm that had actual contact with the poison did not react" (p. 59).
More recently, bogus brain surgery, bogus orthopedic surgery and even bogus migraine treatments have been found to be effective when proffered with confidence.
In our study, we had one of three groups of subjects wear the same set of hearing aids for two months. One month the subjects were told they had the highest technology (digital) available; the other month they were told they had a more conventional hearing aid. Not only did the vast majority of subjects prefer the "digital" hearing aids, but the outcome measures also showed significantly more benefit from the aids labelled "digital."
8. Disposable hearing aids aren't cheaper. We now have one disposable hearing aid (DHA) available in this country. The aid is essentially one-size-fits-all, with a lifetime of 45 days. No batteries to buy; no excessive cleaning necessary. The hearing aid industry is taking a hit from this concept: if a disposable aid can sell for $39, why is the suggested retail price for most of the digital models in excess of $2500? The DHA utilizes a sophisticated amplifier; the bandwidth is good and the distortion low. Why would anyone reject such a deal?
A quick calculation of cost over time suggests that after five years (the typical length of ownership for most custom-made hearing aids) the consumer has an approximate outlay of $1825 per ear ($3650 for bilateral use) for DHAs. A comparable, conventional custom in-the-canal hearing aid will cost more up front (perhaps $1200), but that cost is relatively stable over the next five years (even given the cost of batteries and one out-of-warranty repair). The custom ITC purchase would thus result in a significant cost savings over the life of the instrument.
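The back-of-envelope comparison can be written out explicitly. Treat the numbers below as illustrative assumptions: the per-unit DHA price is set to about $45 so the five-year total lands near the ~$1825 estimate above (the quoted selling price is $39), and the $300 allowance for batteries plus one out-of-warranty repair is likewise an assumption.

```python
import math

# Five-year, one-ear cost sketch: disposable (DHA) vs. custom ITC.
days = 5 * 365            # five years of use
dha_life = 45             # disposable aid lifetime, in days (from the text)
dha_price = 45.0          # assumed effective per-unit price (assumption)
units = math.ceil(days / dha_life)          # replacements needed over 5 years
dha_total = units * dha_price

itc_up_front = 1200.0     # custom ITC purchase price (from the text)
itc_extras = 300.0        # batteries + one out-of-warranty repair (assumption)
itc_total = itc_up_front + itc_extras

print(f"DHA, one ear, 5 years: ${dha_total:.0f}")   # 41 units -> $1845
print(f"ITC, one ear, 5 years: ${itc_total:.0f}")
```

Even with generous allowances for upkeep, the custom instrument comes out well ahead over a five-year ownership period, which is the point of the comparison above.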
The drawbacks of the DHA (no programmability, no VCW, shell fit) are not outweighed by the low initial investment. But lest we ignore the DHA's potential, let's keep in mind that there are still some 15 million persons with hearing loss who should, but don't, wear hearing aids. If the ease of fit and low initial cost of the DHA pull any number of those non-users into the market, both the industry and the general population will benefit.
Bentler, R. and Duve, M. (In Press). Comparison of hearing aids over the 20th Century. Ear and Hearing.
Bentler, R., Niebuhr, D., and Johnson, T. (In Preparation). Impact of digital labelling on outcome measures.
Benson, H. (1996). Timeless healing: The power and biology of belief. New York: Simon and Schuster.
Ching, T., Dillon, H. and Byrne, D. (1998). Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. Journal of the Acoustical Society of America, 103, 1128-1140.
Hogan, C.A., and Turner, C.W. (1998). High frequency audibility: Benefits for listeners with high frequency hearing loss. Journal of the Acoustical Society of America, 104, 432-441.
Kochkin, S. (1999). MarkeTrak V: Baby boomers spur growth in potential market, but penetration declines. Hearing Journal, 52, 33-49.
Schweitzer, C. (1997). Development of digital hearing aids. Trends in Amplification, 2, 41-77.
Smeds, K. (2000). Comparison of threshold-based fitting methods for nonlinear (WDRC) hearing aids - speech intelligibility, loudness and preference (Paper C). Licentiate Thesis. Royal Institute of Technology, Stockholm, Sweden.
Van Buuren, R.A., Festen, J.M., and Plomp, R. (1995). Evaluation of a wide range of amplitude-frequency responses for the hearing impaired. Journal of Speech and Hearing Research, 38, 211-221.