Abstract

The issue of robustness in the presence of noise is regarded as a significant bottleneck in the commercialisation of speech recognition products, particularly in mobile environments. This paper examines the use of an auditory model combined with a speech enhancement algorithm as a robust front-end for a distributed speech recognition (DSR) system, in which front-end functionality is implemented on a limited-resource consumer device such as a mobile phone, while back-end classifier functionality is carried out by a remote server.
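
To make the DSR split concrete, the sketch below shows a device-side front-end that enhances a noisy signal and extracts compact features for transmission to a remote recogniser. It is a minimal illustration only: the use of spectral subtraction, the frame sizes, band edges, and all function names are assumptions for this example, not the specific auditory model or enhancement algorithm described in the paper.

```python
# Illustrative DSR-style front-end: enhancement + compact features on the device,
# classification on the server. All design choices here are assumptions.
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping, Hamming-windowed frames."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    win = np.hamming(frame_len)
    return np.stack([x[i * hop:i * hop + frame_len] * win for i in range(n_frames)])

def enhance_spectra(frames, noise_frames=6, floor=0.01):
    """Magnitude-domain spectral subtraction, using the first frames as a noise estimate."""
    mag = np.abs(np.fft.rfft(frames, axis=1))
    noise = mag[:noise_frames].mean(axis=0)        # assumed: leading frames are noise-only
    return np.maximum(mag - noise, floor * mag)    # subtract, keeping a spectral floor

def band_features(mag, n_bands=12):
    """Log energies in equally spaced frequency bands (a crude auditory-style feature)."""
    edges = np.linspace(0, mag.shape[1], n_bands + 1).astype(int)
    feats = [np.log(mag[:, a:b].sum(axis=1) + 1e-8)
             for a, b in zip(edges[:-1], edges[1:])]
    return np.stack(feats, axis=1)                 # shape: (n_frames, n_bands)

# Device side: enhance, extract features, then send the compact payload to the
# server, where the back-end classifier (the recogniser) runs.
fs = 8000
x = np.random.randn(fs)                            # placeholder for a noisy utterance
features = band_features(enhance_spectra(frame_signal(x)))
payload = features.astype(np.float32).tobytes()    # compact payload for the DSR channel
```

The point of the split is that only the low-bitrate feature payload crosses the network; the computationally heavy classification stays on the server.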
