Abstract

A blind approach for estimating the signal-to-noise ratio (SNR) of a speech signal corrupted by additive noise is proposed. The method is based on a pattern recognition paradigm using various linear prediction based features, a vector quantizer (VQ) classifier, and estimation combination. Blind SNR estimation is very useful in speaker identification systems in which a confidence metric is determined along with the speaker identity. The confidence metric is partially based on the mismatch between the training and testing conditions of the speaker identification system, and SNR estimation is very important in evaluating the degree of this mismatch. The aim is to correctly estimate SNR values from 0 to 30 dB, a range that is both practical and crucial for speaker identification systems. Experiments consider (1) artificially generated additive white Gaussian noise, pink noise, and bandpass noise and (2) fifteen noise types from the NOISEX database. Four features are combined to get the best results. The average SNR estimation error depends on the noise type: it is relatively low for pink noise and jet cockpit noise and relatively high for destroyer operations room noise and military vehicle noise. For both the artificially generated noise and the NOISEX data, the error is lower than that achieved by the improved minima controlled recursive averaging (IMCRA) method, which uses SNR estimation for speech enhancement. Combining the four features with IMCRA lowers the error for 8 of the 15 noise types from NOISEX.

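The abstract describes the approach only at a high level: train class-conditional VQ codebooks on features extracted at known SNR levels, assign a test utterance to the SNR class whose codebook gives the lowest quantization distortion, and combine the estimates obtained from several feature streams. The sketch below illustrates that idea under stated assumptions; it is not the authors' implementation. The 5 dB SNR grid, the codebook size, the use of k-means for codebook training, and simple averaging as the combination rule are all illustrative assumptions.

```python
# Minimal sketch of VQ-based blind SNR classification (illustrative assumptions,
# not the paper's implementation).
import numpy as np
from sklearn.cluster import KMeans

SNR_CLASSES_DB = [0, 5, 10, 15, 20, 25, 30]  # assumed 5 dB grid over 0-30 dB
CODEBOOK_SIZE = 64                           # assumed codebook size


def train_codebooks(features_by_snr):
    """features_by_snr: dict {snr_db: (n_frames, dim) array of training features}.
    Returns one codebook (array of centroids) per SNR class."""
    return {
        snr: KMeans(n_clusters=CODEBOOK_SIZE, n_init=5, random_state=0)
        .fit(frames)
        .cluster_centers_
        for snr, frames in features_by_snr.items()
    }


def vq_distortion(frames, codebook):
    """Average minimum squared Euclidean distance from each frame to the codebook."""
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()


def estimate_snr(frames, codebooks):
    """Assign the SNR class whose codebook yields the lowest average distortion."""
    return min(codebooks, key=lambda snr: vq_distortion(frames, codebooks[snr]))


def combine_estimates(per_feature_estimates):
    """Combine the SNR estimates from several feature streams (here, a simple mean)."""
    return float(np.mean(per_feature_estimates))
```

In use, one such classifier would be trained per feature stream (e.g., several LP-derived features), each producing its own SNR estimate for a test utterance, and `combine_estimates` would fuse them into the final value reported in the experiments.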