Abstract

Commonly used feature extraction methods for automatic speech recognition (ASR) incorporate only rudimentary psychoacoustic findings. Several works have shown that physiologically more accurate auditory processing in the feature extraction stage can enhance the robustness of an ASR system in noisy environments. The “auditory image model” (AIM) is one such more sophisticated computational model. In this work, we show how invariant integration can be applied in the feature space given by the AIM, and we analyze the performance of the resulting features under noisy conditions on the Aurora-2 task. Furthermore, we show that previously presented features based on power normalization and invariant integration benefit from the AIM-based integration features when the feature vectors are combined.
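
To illustrate the general idea behind invariant-integration features, the following minimal Python sketch computes one such feature as the average of a monomial (a product of exponentiated time-frequency values) over a window of temporal shifts. The function name, channel/exponent selection, and input representation are hypothetical placeholders, not the authors' exact AIM-based parameterization.

```python
import numpy as np

def invariant_integration_feature(X, channels, exponents, max_shift):
    """Average of a monomial over temporal shifts (illustrative sketch).

    X         : 2-D array (time frames x frequency channels), e.g. a window of
                an AIM-like time-frequency representation (hypothetical input).
    channels  : channel indices entering the monomial.
    exponents : exponent applied to each selected channel.
    max_shift : number of temporal shifts to integrate over.
    """
    T = X.shape[0]
    values = []
    for shift in range(min(max_shift, T)):
        # Build the monomial from the selected channels at this time shift.
        monomial = 1.0
        for c, e in zip(channels, exponents):
            monomial *= X[shift, c] ** e
        values.append(monomial)
    # Averaging over shifts makes the feature robust to small temporal translations.
    return float(np.mean(values))

# Toy usage: 20 frames x 40 non-negative "auditory" channels of random data.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(20, 40)))
print(invariant_integration_feature(X, channels=[3, 17], exponents=[1, 2], max_shift=10))
```

In practice, many such features with different channel/exponent selections are stacked into a feature vector; combining them with other feature streams (e.g. power-normalized features) can then be as simple as concatenating the per-frame vectors.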
