Abstract

Low-resource automatic speech recognition (ASR) is challenging due to the scarcity of labeled training data. To address this issue, multilingual meta-learning learns a better model initialization from many source-language tasks, enabling fast adaptation to unseen target languages. However, diverse source languages vary greatly in quantity and difficulty because of their different data scales and phonological systems. These differences lead to task-quantity and task-difficulty imbalance and thus to the failure of multilingual meta-learning ASR. In this work, we propose a task-based meta focal loss (TMFL) approach to address this challenge. Specifically, we introduce a hard-task moderator and update the meta-parameters using gradients from both the support set and the query set. The proposed approach focuses more on hard tasks and makes full use of their data. Moreover, we analyze the hard-task moderator and interpret its effect at the sample level. Experimental results show that TMFL significantly outperforms state-of-the-art multilingual meta-learning on all target languages of the IARPA BABEL and OpenSLR datasets, especially under very-low-resource conditions. In particular, it reduces the character error rate from 72% to 60% when fine-tuning the pre-trained model with about 22 hours of Vietnamese data.
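To make the mechanism in the abstract concrete, the sketch below shows how a focal-style hard-task moderator could weight per-task meta-gradients in a first-order MAML-style update, using gradients from both the support and query sets. This is a minimal sketch under stated assumptions, not the authors' TMFL implementation: the focal form (1 - e^{-L})^gamma, the single inner step, the first-order approximation, and all identifiers (meta_train_step, inner_lr, gamma, the task tuple layout) are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def meta_train_step(model, tasks, meta_opt, inner_lr=1e-3, gamma=2.0):
    """One meta-update over a batch of source-language tasks.

    Each task is a (support_x, support_y, query_x, query_y) tuple.
    """
    meta_opt.zero_grad()
    for support_x, support_y, query_x, query_y in tasks:
        # Inner loop: adapt a copy of the meta-parameters on the support set.
        learner = copy.deepcopy(model)
        support_loss = F.cross_entropy(learner(support_x), support_y)
        support_grads = torch.autograd.grad(
            support_loss, list(learner.parameters()))
        with torch.no_grad():
            for p, g in zip(learner.parameters(), support_grads):
                p -= inner_lr * g

        # Outer loss on the query set, evaluated with the adapted parameters.
        query_loss = F.cross_entropy(learner(query_x), query_y)
        query_grads = torch.autograd.grad(
            query_loss, list(learner.parameters()))

        # Hard-task moderator (assumed form): treat exp(-loss) as a
        # task-level confidence, so low-loss (easy) tasks are down-weighted
        # and high-loss (hard) tasks dominate, in the spirit of focal loss.
        task_loss = (support_loss + query_loss).detach()
        weight = (1.0 - torch.exp(-task_loss)) ** gamma

        # Accumulate focal-weighted gradients from both the support set and
        # the query set onto the meta-parameters (first-order approximation).
        for p, g_s, g_q in zip(model.parameters(), support_grads, query_grads):
            g = weight * (g_s + g_q)
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()
```

Treating exp(-L) as a task-level analogue of the predicted probability in the original focal loss is one natural way to realize a hard-task moderator; the paper's actual weighting function may differ.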
