Abstract
In this paper, new auditory-based features derived from cochlear filters are proposed for the classification of unvoiced fricatives. Classification experiments address sibilants (i.e., /s/, /sh/) vs. non-sibilants (i.e., /f/, /th/), as well as fricatives within each sub-category (i.e., intra-sibilant and intra-non-sibilant). Our experimental results indicate that the proposed feature set, viz., Cochlear Filter-based Cepstral Coefficients (CFCC), performs better for individual fricative classification in clean conditions (i.e., a gain of 3.41 % in average classification accuracy and a drop of 6.59 % in EER) than the state-of-the-art feature set, viz., Mel Frequency Cepstral Coefficients (MFCC). Furthermore, under signal degradation (i.e., additive white noise), classification accuracy using the proposed feature set falls much more slowly (i.e., from 86.73 % in clean conditions to 77.46 % at an SNR of 5 dB) than with MFCC (i.e., from 82.18 % in clean conditions to 46.93 % at an SNR of 5 dB).
Published in: International Journal on Natural Language Computing