Abstract

This paper proposes a novel speaker adaptation algorithm for classifying speech based on deep neural networks (DNNs). The adaptation algorithm consists of two steps. In the first step, a deep neural network is trained on raw Mel-frequency cepstral coefficient (MFCC) features to discover hidden structure in the data, and the activations of its last hidden layer are employed as acoustic features. In the second step, an adaptation algorithm learns speaker similarity scores from a small amount of adaptation data per target speaker using the DNN-based acoustic features, and classification is then performed with a k-nearest neighbor (k-NN) classifier based on these similarity scores. The novelty of this work is that, instead of modifying and re-training the DNN for speaker adaptation, which involves a large number of parameters and is computationally expensive, the activations of the trained DNN are used to project features from the MFCC space into a sparse DNN space, and speaker adaptation is performed in that space based on similarity (i.e., nearest neighbors) using the k-NN algorithm. With only a small amount of adaptation data, the method reduces phoneme classification errors on the TIMIT dataset by 23%. This work also analyzes the impact of the deep neural network architecture on speaker adaptation performance.
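The sketch below illustrates the two-step pipeline described in the abstract, assuming synthetic data in place of TIMIT MFCC frames; the layer sizes, optimizer, number of neighbors, and adaptation-set size are illustrative placeholders, not the paper's actual configuration.

```python
# Minimal sketch of the two-step pipeline: (1) train a DNN on MFCCs and
# take its last hidden layer as the acoustic feature space; (2) classify
# with k-NN fitted on a small amount of adaptation data. All data and
# hyperparameters here are hypothetical stand-ins.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

torch.manual_seed(0)
np.random.seed(0)

N_FRAMES, N_MFCC, N_PHONES = 2000, 39, 48  # hypothetical sizes

# Stand-in for MFCC feature frames and phoneme labels.
X = torch.randn(N_FRAMES, N_MFCC)
y = torch.randint(0, N_PHONES, (N_FRAMES,))

# Step 1: train a DNN on raw MFCCs; its last hidden layer will serve
# as the projection into the DNN feature space.
hidden = nn.Sequential(
    nn.Linear(N_MFCC, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
)
model = nn.Sequential(hidden, nn.Linear(512, N_PHONES))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(20):  # a few full-batch epochs, for illustration only
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Project all frames to the activations of the last hidden layer.
with torch.no_grad():
    feats = hidden(X).numpy()

# Step 2: speaker adaptation via k-NN. A small adaptation set (here a
# random subset, hypothetical) populates the k-NN index; test frames are
# classified by their nearest neighbors in the DNN feature space.
adapt_idx = np.random.choice(N_FRAMES, size=200, replace=False)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(feats[adapt_idx], y.numpy()[adapt_idx])

test_idx = np.setdiff1d(np.arange(N_FRAMES), adapt_idx)[:100]
print("k-NN phoneme accuracy:", knn.score(feats[test_idx], y.numpy()[test_idx]))
```

Because only the k-NN index is rebuilt per speaker, adaptation avoids re-training the DNN's parameters, which is the computational saving the abstract emphasizes.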
