Last updated on Apr 16, 2018
Two well-known professors at Ohio State University have created a processing algorithm that allows people with hearing loss to distinguish speech clearly from background noise. Instrumental to the new technology is an algorithm developed by DeLiang Wang, professor of computer science and engineering.
The hope is that the algorithm, which in testing made hearing-impaired listeners even more successful than those with normal hearing at understanding speech in a noisy environment, will soon be implemented in hearing aids and cochlear implants. Such devices could provide a better quality of life for millions of people.
In the same way it took years for phones to go from "bricks" to iPhones, it may be a few years or more before we see this processing in hearing aids, but the results so far are encouraging. In the meantime, digital hearing aids like our MDHearingAid AIR offer better processing and sound quality at a lower price for those who cannot wait.