Abstract

Here we demonstrate a two-point-neuron-inspired audio-visual (AV) open Master Hearing Aid (openMHA) framework for on-chip, energy-efficient speech enhancement (SE). The developed system is compared against state-of-the-art cepstrum-based audio-only (A-only) SE and conventional point-neuron-inspired deep neural network (DNN)-driven multimodal (MM) SE. Pilot experiments demonstrate that the proposed system outperforms audio-only SE in terms of speech quality and intelligibility, and performs comparably to the point-neuron-inspired DNN with significantly reduced energy consumption during both training and inference.
