Abstract
The task of developing an automatic speaker verification (ASV) system for children's speech is formidable for several reasons, the dearth of domain-specific data being one of them. The challenge intensifies further when short utterances are introduced, a relatively unexplored scenario in children's ASV. Voice-based biometric systems suffer severely when the speech data used for either enrollment or verification are inadequate in both volume and duration. To circumvent this data scarcity, this work proposes a data augmentation approach that combines in-domain and out-of-domain techniques. The in-domain augmentation applies speed perturbation to children's speech. The out-of-domain data come from adult speakers, whose acoustic attributes stand in stark contrast to those of child speakers. The acoustic characteristics of the adult speech are therefore altered on two fronts, speech waveform modification and feature-level modification, to render the data acoustically similar to children's speech prior to augmentation. The waveform modification employs signal processing techniques such as prosody modification, formant modification and voice conversion, while the feature-level modification employs vocal tract length normalization (VTLN), which explicitly models and compensates for the ill-effects of variations in vocal tract length by linearly warping the frequency axis of speech signals. The proposed augmentation approach not only increases the amount of training data but also effectively captures the missing target attributes, thereby boosting verification performance.
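The two ingredients named above, speed perturbation and linear frequency warping in the spirit of VTLN, can be sketched in a few lines of numpy. This is a minimal illustration of the general techniques, not the paper's actual implementation; the function names, the interpolation-based resampler, and the clipped linear warp are our own simplifying assumptions.

```python
import numpy as np

def speed_perturb(signal, factor):
    """Resample a waveform by `factor` via linear interpolation.
    factor < 1.0 slows the speech down (longer output);
    factor > 1.0 speeds it up (shorter output)."""
    n_out = int(round(len(signal) / factor))
    # Fractional positions in the original signal for each output sample.
    positions = np.arange(n_out) * factor
    return np.interp(positions, np.arange(len(signal)), signal)

def vtln_warp_freqs(freqs, alpha, f_max):
    """Linearly warp a frequency axis by factor `alpha`, clipped at f_max.
    A simplified stand-in for the piecewise-linear warps used in VTLN:
    alpha > 1 shifts content upward (shorter vocal tract, child-like),
    alpha < 1 shifts it downward (longer vocal tract, adult-like)."""
    return np.minimum(np.asarray(freqs, dtype=float) * alpha, f_max)
```

In a real recipe the perturbation factors (e.g. 0.9, 1.0, 1.1) each produce an extra copy of the training set, and the warp would be applied to the filterbank centre frequencies rather than to raw bin indices.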
A relative improvement of 48.01% in equal error rate (EER) over the baseline system testifies to this. Furthermore, the conventionally used Mel-frequency cepstral coefficients (MFCC) are known to average out higher-frequency components, whereas prior work has shown that a significant amount of relevant acoustic information resides in the higher-frequency region of children's speech. Effective preservation of these higher-frequency contents is therefore of paramount importance for the development of a reliable and robust children's ASV system. In this regard, frame-level concatenation of the MFCC features with inverse-Mel-frequency cepstral coefficient (IMFCC) features is undertaken, with the sole intention of preserving the higher-frequency contents of children's speech. The low canonical correlation between the MFCC and IMFCC feature vectors provides the necessary impetus for their fusion. When combined with the proposed data augmentation, this feature concatenation further improves verification performance, yielding an overall relative reduction of 50.15% in EER.
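The MFCC/IMFCC contrast comes down to the filterbank: IMFCC mirrors the mel-spaced triangular filters along the frequency axis, so the fine resolution mel places at low frequencies lands at high frequencies instead, after which the two feature streams are concatenated frame by frame. The sketch below is a generic numpy illustration of that idea under our own simplifying assumptions (filter count, FFT size, and the mirroring convention are not taken from the paper).

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr, invert=False):
    """Triangular filterbank on the mel scale over n_fft//2 + 1 FFT bins.
    invert=True mirrors the centre frequencies about the Nyquist axis,
    giving the inverse-mel layout used by IMFCC (fine resolution at
    high frequencies instead of low)."""
    m_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    hz_pts = mel_to_hz(m_pts)
    if invert:
        hz_pts = (sr / 2.0) - hz_pts[::-1]  # mirror the frequency layout
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):           # rising slope of the triangle
            fbank[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):           # falling slope of the triangle
            fbank[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fbank

def concat_features(mfcc, imfcc):
    """Frame-level concatenation: stack the two per-frame feature
    matrices (frames x dims) along the feature axis."""
    return np.concatenate([mfcc, imfcc], axis=1)
```

In the full pipeline each filterbank's log energies would be decorrelated with a DCT before concatenation; here only the filterbank geometry and the fusion step are shown.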