Abstract

This paper describes an automatic speech recognition (ASR) system for the 3rd CHiME challenge, which addresses noisy acoustic scenes in public environments. The proposed system comprises a multi-channel speech enhancement front-end with a microphone channel failure detection method that cross-compares the modulation spectra of speech across channels to detect erroneous microphone recordings. The main focus of the submission is the investigation of the amplitude modulation filter bank (AMFB) as a method to extract long-term acoustic cues prior to a Gaussian mixture model (GMM) or deep neural network (DNN) based ASR classifier. AMFB features are shown to outperform the commonly used frame-splicing of filter bank features even on a performance-optimized challenge system; that is, temporal analysis of speech by hand-crafted, auditorily motivated AMFBs proves more robust than data-driven extraction of temporal dynamics with a DNN. Our final ASR system, which additionally adapts acoustic features to speaker characteristics, achieves an absolute word error rate reduction of approximately 21.53 % compared to the best CHiME-3 baseline system on the real test data.
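The idea of detecting failed microphones by cross-comparing modulation spectra can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's exact algorithm: the envelope extraction (frame-wise log energy), the median reference spectrum, and the correlation threshold of 0.6 are all hypothetical choices made for this example.

```python
# Hypothetical sketch of channel-failure detection by cross-comparing
# modulation spectra across microphones. Frame sizes, the median-based
# reference, and the 0.6 threshold are illustrative assumptions only.
import numpy as np

def modulation_spectrum(signal, frame_len=400, hop=160):
    """Modulation spectrum of one channel via a frame-wise
    log-energy amplitude envelope."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    env = np.array([
        np.log(np.sum(signal[i * hop:i * hop + frame_len] ** 2) + 1e-10)
        for i in range(n_frames)
    ])
    env -= env.mean()
    # Magnitude spectrum of the envelope = modulation spectrum.
    return np.abs(np.fft.rfft(env))

def failed_channels(channels, threshold=0.6):
    """Flag channels whose modulation spectrum correlates poorly
    with the median modulation spectrum over all channels."""
    specs = np.stack([modulation_spectrum(c) for c in channels])
    ref = np.median(specs, axis=0)
    return [k for k, s in enumerate(specs)
            if np.corrcoef(s, ref)[0, 1] < threshold]
```

A channel that still records speech shares the slow (a few Hz) amplitude modulations of the other microphones, so its modulation spectrum correlates well with the cross-channel reference; a broken or masked microphone lacks that common modulation pattern and falls below the threshold.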
