Abstract

This paper proposes the adaptation of neural network-based acoustic models for automatic speech recognition (ASR) using a Squeeze-and-Excitation (SE) network. In particular, this work explores using the SE network to learn utterance-level embeddings. Acoustic modelling is performed with Light Gated Recurrent Units (LiGRU). The utterance embeddings are learned from hidden-unit activations jointly with the LiGRU and are used to scale the activations of the corresponding hidden layers in the LiGRU network. The advantage of this approach is that it requires no domain labels, such as speaker or noise identities, to perform the adaptation, thereby providing unsupervised adaptation. Global average pooling and attentive pooling are applied to the hidden units to extract utterance-level information that represents the speakers and acoustic conditions. ASR experiments were carried out on the TIMIT and Aurora 4 corpora. The proposed model outperforms the respective baselines on both datasets, with relative improvements of 5.59% on TIMIT and 5.54% on Aurora 4. These experiments show the potential of using the conditioning information learned via utterance embeddings in the SE network to adapt acoustic models to speakers, noise, and other acoustic conditions.
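The core mechanism described above — pooling hidden activations over an utterance into an embedding, passing it through a small excitation network, and using the resulting gates to scale the hidden activations — can be sketched as follows. This is an illustrative pure-Python sketch of standard Squeeze-and-Excitation gating with global average pooling, not the authors' implementation; the function and weight names (`se_utterance_scaling`, `W1`, `W2`) are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def matvec(W, v):
    # Multiply a weight matrix (list of rows) by a vector.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def se_utterance_scaling(H, W1, W2):
    """Squeeze-and-Excitation gating over one utterance (illustrative).

    H  : list of T frames, each a list of D hidden-unit activations
    W1 : bottleneck weights, shape (D/r, D) for reduction ratio r
    W2 : expansion weights, shape (D, D/r)
    Returns the gated activations, same shape as H.
    """
    T, D = len(H), len(H[0])
    # Squeeze: global average pooling over time -> utterance-level embedding
    z = [sum(frame[d] for frame in H) / T for d in range(D)]
    # Excitation: bottleneck + ReLU, then expansion + sigmoid gates in (0, 1)
    s = [relu(a) for a in matvec(W1, z)]
    g = [sigmoid(a) for a in matvec(W2, s)]
    # Scale: multiply each hidden dimension of every frame by its gate
    return [[frame[d] * g[d] for d in range(D)] for frame in H]

# Tiny example: T = 3 frames, D = 4 hidden units, bottleneck size 2.
H = [[1.0, -2.0, 0.5, 3.0],
     [0.0, 1.0, -1.0, 2.0],
     [2.0, 0.0, 0.0, 1.0]]
W1 = [[0.1, 0.1, 0.1, 0.1],
      [0.2, 0.2, 0.2, 0.2]]
W2 = [[0.3, -0.3], [0.1, 0.1], [0.0, 0.2], [0.5, 0.5]]
scaled = se_utterance_scaling(H, W1, W2)
```

In the paper's setting, attentive pooling can replace the plain time average in the squeeze step, and because the gates depend only on the utterance's own activations, no speaker or noise labels are needed at adaptation time.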
