Abstract

The human ability to classify sounds remains unmatched by recent machine learning methods. Psychoacoustic and physiological studies indicate that the mammalian auditory system decomposes audio signals into their acoustic and modulation frequency components prior to further analysis. Since most linguistic information is known to be coded in amplitude fluctuations, mimicking the temporal processing strategies of the auditory system in automatic speech recognition (ASR) promises to increase recognition accuracy. We present an amplitude modulation filter bank (AMFB) that serves as a feature extraction scheme for ASR systems. The time-frequency resolution of the employed FIR filters, i.e., their bandwidth and modulation frequency settings, is adopted from the psychophysically inspired model of Dau (1997), which was originally proposed to describe data from human psychoacoustics. Investigations of modulation phase indicate the need to preserve this information in amplitude modulation features, and we show that filter symmetry has an important impact on ASR performance. The proposed feature extraction scheme yields significant word error rate (WER) reductions on the Aurora-2, Aurora-4, and REVERB ASR tasks compared with other recent feature extraction methods such as MFCC, FDLP, and PNCC features. AMFB features exhibit high robustness against additive noise, differing transmission channel characteristics, and room reverberation. On the Aurora-4 benchmark, for instance, an average WER of 12.33% with raw and 11.31% with bottleneck-transformed features is attained, corresponding to relative improvements of 19.6% and 29.2% over raw MFCC features, respectively.
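
As a rough illustration of what filtering sub-band amplitude envelopes with an FIR modulation filter bank while retaining modulation phase can look like, here is a minimal sketch in Python. It assumes log-mel envelopes sampled at a 100 Hz frame rate; the filter centers, bandwidths, tap count, and the Hann-windowed complex-exponential design are illustrative placeholders, not the authors' exact AMFB settings, which the paper derives from Dau's (1997) model.

```python
# Minimal AMFB-style sketch (illustrative parameters, not the paper's exact filter bank).
import numpy as np
from scipy.signal import lfilter

def modulation_filter(center_hz, bandwidth_hz, frame_rate_hz=100.0, n_taps=101):
    """Complex FIR band-pass filter for one modulation band: a windowed sinc
    envelope shifted to center_hz by a complex exponential. The complex output
    keeps both magnitude and phase of the modulation components."""
    t = (np.arange(n_taps) - n_taps // 2) / frame_rate_hz
    envelope = np.sinc(bandwidth_hz * t) * np.hanning(n_taps)
    taps = envelope * np.exp(2j * np.pi * center_hz * t)
    return taps / np.sum(np.abs(taps))

def amfb_features(envelopes, centers_hz=(0.0, 2.0, 4.0, 8.0, 16.0)):
    """Filter each sub-band envelope (rows of `envelopes`) with every
    modulation filter; real and imaginary parts are both kept so that
    modulation phase information is preserved."""
    feats = []
    for fc in centers_hz:
        # Placeholder bandwidth rule; the paper adopts Dau's (1997) settings.
        taps = modulation_filter(fc, bandwidth_hz=max(fc, 2.0))
        filtered = lfilter(taps, 1.0, envelopes, axis=1)
        feats.append(filtered.real)
        if fc > 0:  # the DC band carries no meaningful phase
            feats.append(filtered.imag)
    return np.concatenate(feats, axis=0)

# Toy usage: 40 mel bands, 300 frames (10 ms shift) of random log-envelopes.
rng = np.random.default_rng(0)
log_mel = rng.standard_normal((40, 300))
features = amfb_features(log_mel)
print(features.shape)  # (360, 300): 40 bands x (1 real DC + 4 x real/imag)
```

Keeping both the real and imaginary filter outputs is one straightforward way to retain modulation phase, which the abstract reports as important for ASR performance.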
