Abstract

Existing automatic speech recognition (ASR) systems use spectral or temporal features of speech. The performance of such systems is still poor compared to human auditory perception, especially in noisy environments. This paper concentrates on the extraction of spectro-temporal features based on physiologically and psychoacoustically inspired approaches. Two-dimensional Gabor filters are used to estimate the spectro-temporal features from the time–frequency representation of the uttered speech signal. The Gabor filters are designed using the concept of a constant Q factor, motivated by the observation that the human auditory system maintains an approximately constant Q in the frequency responses along its filter-bank chain. Constant-Q analysis ensures that the Gabor filters occupy a set of geometrically spaced spectral and temporal bins. The time–frequency representation of the speech signal is a key ingredient of the Gabor-based feature extraction method; for this mapping, the gammatonegram is adopted instead of the conventional spectrogram. The performance of the ASR system with the proposed feature set is experimentally validated on the AURORA2 noisy digit database. Under clean training, the proposed features achieve a relative improvement of about 50% in word error rate (WER) over Mel-frequency cepstral coefficient (MFCC) features, and a relative WER improvement of 23% over existing spectro-temporal feature extraction methods. Further analysis is carried out on TIMIT corrupted with noise samples taken from the NOISEX-92 database. These experiments confirm that the proposed features yield a robust acoustic model for the ASR system.
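
To make the filtering step described above concrete, the sketch below builds a small bank of two-dimensional Gabor filters with geometrically spaced modulation frequencies and applies them to a precomputed time–frequency representation (e.g. a gammatonegram). This is a minimal illustration only: the filter sizes, the Hann envelope, the specific modulation-frequency values, the random stand-in for the gammatonegram, and the function names (gabor_filter_2d, gabor_features) are assumptions for the example, not the paper's exact design.

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_filter_2d(omega_t, omega_f, size_t=25, size_f=15):
        # Complex 2-D Gabor filter: a plane wave at temporal modulation
        # omega_t (rad/frame) and spectral modulation omega_f (rad/channel),
        # windowed by a separable Hann envelope (illustrative choice).
        t = np.arange(size_t) - (size_t - 1) / 2.0
        f = np.arange(size_f) - (size_f - 1) / 2.0
        env = np.outer(np.hanning(size_f), np.hanning(size_t))
        carrier = np.exp(1j * (omega_f * f[:, None] + omega_t * t[None, :]))
        return env * carrier

    def gabor_features(tf_rep, temporal_rates, spectral_scales):
        # Filter a time-frequency representation (channels x frames) with the
        # bank of 2-D Gabor filters; the real part of each filter is used here.
        feats = []
        for omega_f in spectral_scales:
            for omega_t in temporal_rates:
                g = gabor_filter_2d(omega_t, omega_f)
                feats.append(convolve2d(tf_rep, np.real(g), mode='same'))
        return np.stack(feats)  # (n_filters, channels, frames)

    # Random stand-in for a gammatonegram: 64 gammatone channels x 200 frames.
    rng = np.random.default_rng(0)
    gammatonegram = rng.random((64, 200))

    # Geometrically spaced modulation frequencies (constant-Q-style spacing);
    # the base values and octave spacing are assumed for this sketch.
    temporal_rates = 2 * np.pi * 0.01 * 2.0 ** np.arange(4)
    spectral_scales = 2 * np.pi * 0.02 * 2.0 ** np.arange(3)

    feats = gabor_features(gammatonegram, temporal_rates, spectral_scales)
    print(feats.shape)  # (12, 64, 200)

In practice the filter outputs would be subsampled or summarized before being passed to the acoustic model; that stage is omitted here.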
