A next-generation hearing aid that can 'see' is to be developed by a multidisciplinary team of researchers and clinicians led by a University of Stirling computer scientist.
Designed to help users in noisy environments, the device will use a miniaturised camera that can lip read, process information in real time, and seamlessly fuse and switch between audio and visual cues.
There are more than 10 million people in the UK – one in six of the population – with some form of hearing loss. By 2031, this is estimated to rise to 14.5 million.
Professor Amir Hussain is leading the ambitious joint research project, which has received nearly £500,000 from the UK Government’s Engineering and Physical Sciences Research Council (EPSRC) and industry.
Professor Hussain said: "This exciting world-first project has the potential to significantly improve the lives of millions of people who have hearing difficulties.
"Existing commercial hearing aids are capable of working on an audio-only basis, but the next-generation audio-visual model we want to develop will intelligently track the target speaker's face for visual cues, like lip reading. These will further enhance the audio sounds that are picked up and amplified by conventional hearing aids.
"The 360-degree approach to our software design is expected to open up more everyday environments to device users, enabling them to confidently communicate in noisier settings, with a potentially reduced listening effort. This will be important to help ensure uptake of this transformational technology.
"In addition to people with hearing loss, the unique lip reading capabilities of this device could also prove potentially valuable to those communicating in very noisy places where ear defenders are worn, such as in factories, and in emergency response scenarios."
Professor Hussain’s team has been working on a prototype, and the research investment will be put towards tackling the key challenge of blending and enhancing appropriately selected audio and visual cues. Speed is crucial: the processing delay in a hearing aid must stay below 10ms.
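To illustrate the kind of problem the team is tackling (this is not the project's actual algorithm), cue fusion is often framed as a per-frame weighted combination of audio- and visual-derived speech estimates, with the weight shifting towards the visual stream as acoustic noise rises. A minimal sketch, with all function and parameter names hypothetical:

```python
import numpy as np

def fuse_cues(audio_estimate, visual_estimate, snr_db,
              snr_low=-5.0, snr_high=15.0):
    """Blend audio- and visual-derived speech estimates for one frame.

    A linear ramp on the estimated signal-to-noise ratio (dB): at or
    above snr_high the audio cue is trusted fully; at or below snr_low
    the fusion leans entirely on the visual cue; in between, the two
    are mixed proportionally.
    """
    w_audio = np.clip((snr_db - snr_low) / (snr_high - snr_low), 0.0, 1.0)
    return w_audio * audio_estimate + (1.0 - w_audio) * visual_estimate

# Toy per-frame spectral estimates from each modality.
frame_audio = np.array([1.0, 2.0])
frame_visual = np.array([0.0, 0.0])

clean = fuse_cues(frame_audio, frame_visual, snr_db=20.0)   # follows audio
noisy = fuse_cues(frame_audio, frame_visual, snr_db=-10.0)  # follows visual
mixed = fuse_cues(frame_audio, frame_visual, snr_db=5.0)    # 50/50 blend
```

In a real device this weighting would have to be computed within the sub-10ms latency budget mentioned above, which is what makes the engineering challenge hard rather than the arithmetic.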
Stirling Psychologist Professor Roger Watt will work with Professor Hussain and help to develop new computing models of human vision for real-time tracking of facial features.
Once developed, the software prototype will be made freely available to other researchers worldwide, opening up the opportunity for further work in the field.
Future hardware prototyping research will explore the most user-friendly and aesthetically pleasing placements for the miniature camera attachment, such as fitting it into a pair of ordinary glasses, a brooch, a necklace or even an earring.
Professor Hussain is also collaborating with Dr Jon Barker at the University of Sheffield, who has developed biologically-inspired approaches for separating speech sources that will complement the audio-visual enhancement techniques pioneered at Stirling. Project partners include the MRC/CSO Institute of Hearing Research – Scottish Section, and the international hearing-aid manufacturer Phonak.
The Institute’s Dr William Whitmer said: "We are excited about the potential for this new technology – which takes advantage of the similar information presented to the eyes and ears in noisy conversation – to aid listening in those difficult situations, a consistent issue for those affected by hearing loss."