Abstract

Pseudo-labeling is the most widely adopted method for pre-training automatic speech recognition (ASR) models. However, its performance suffers as the quality of the supervised teacher model degrades. Inspired by the successes of contrastive representation learning for both computer vision and speech applications, and more recently for supervised learning of visual objects [1], we propose Contrastive Semi-supervised Learning (CSL). CSL eschews directly predicting teacher-generated pseudo-labels in favor of utilizing them to select positive and negative examples. On the challenging task of transcribing public social media videos, CSL reduces WER by 8% compared to standard cross-entropy pseudo-labeling (CE-PL) when 10 hr of supervised data is used to annotate 75,000 hr of videos. The WER reduction grows to 19% under the ultra-low-resource condition of using 1 hr of labels for teacher supervision. Under out-of-domain conditions, CSL generalizes substantially better, showing up to 17% WER reduction compared to the strongest CE-PL pre-trained model.
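To make the core idea concrete, the following is a minimal sketch (not the authors' code) of a contrastive objective in which frames sharing the same teacher pseudo-label are treated as positives and all other frames in the batch as negatives. The function name, tensor shapes, and temperature value are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a pseudo-label-driven supervised-contrastive
# (InfoNCE-style) loss: positives = pairs with matching teacher labels.
import torch
import torch.nn.functional as F

def contrastive_pseudo_label_loss(embeddings: torch.Tensor,
                                  pseudo_labels: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) frame/segment representations from the student.
    pseudo_labels: (N,) token ids emitted by the teacher model."""
    z = F.normalize(embeddings, dim=1)             # unit norm: cosine similarity
    sim = z @ z.t() / temperature                  # (N, N) similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))     # exclude self-comparisons

    # Positives: pairs whose teacher pseudo-labels agree (excluding self).
    pos_mask = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    pos_mask &= ~self_mask

    # Log-softmax over each row; maximize log-likelihood of positive pairs.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts

    # Anchors with no positive in the batch are dropped from the average.
    valid = pos_mask.any(dim=1)
    return loss[valid].mean() if valid.any() else sim.new_zeros(())
```

Under this reading, the teacher's pseudo-labels only define which pairs attract and repel in representation space; unlike CE-PL, the student never fits the (possibly noisy) labels directly, which is one plausible account of the robustness to weak teachers reported above.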

