Abstract

Model-Agnostic Meta-Learning (MAML) is an effective meta-learning algorithm for low-resource automatic speech recognition (ASR). It uses gradient descent to learn model initialization parameters across multiple source languages, enabling the model to adapt quickly to unseen low-resource languages. However, MAML is unstable owing to its bilevel loss back-propagation structure, which significantly affects the stability and generalization of the model. Moreover, because different source languages contribute differently to the target language, the loss weights reflecting each language's influence require costly manual tuning during training, and a proper choice of these weights strongly affects the performance of the whole model. In this paper, we propose to apply a loss weight adaptation method to MAML using a Convolutional Neural Network (CNN) with homoscedastic uncertainty. Experimental results show that the proposed method outperforms previous gradient-based meta-learning methods and other loss weight adaptation methods, and further improves the stability and effectiveness of MAML.
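For readers unfamiliar with homoscedastic-uncertainty loss weighting, the sketch below illustrates the general idea of learning one weight per source-language loss, in the spirit of uncertainty-based multi-task weighting (Kendall et al.). The class name, tensor shapes, and the choice of PyTorch are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class UncertaintyLossWeighting(nn.Module):
    """Hypothetical sketch of homoscedastic-uncertainty loss weighting.

    Each per-language loss L_i is scaled by a learned log-variance s_i:
        total = sum_i ( exp(-s_i) * L_i + s_i ),
    so languages with noisier (higher-variance) losses receive smaller
    effective weights, removing the need for manual weight tuning.
    """

    def __init__(self, num_languages: int):
        super().__init__()
        # One learnable log-variance per source language.
        self.log_vars = nn.Parameter(torch.zeros(num_languages))

    def forward(self, per_language_losses: torch.Tensor) -> torch.Tensor:
        # per_language_losses: tensor of shape (num_languages,)
        precision = torch.exp(-self.log_vars)
        return (precision * per_language_losses + self.log_vars).sum()


# Usage: combine meta-training losses from several source languages.
weighting = UncertaintyLossWeighting(num_languages=3)
losses = torch.tensor([1.2, 0.8, 2.1])  # e.g. per-language ASR losses
total_loss = weighting(losses)
```

The additive `log_vars` term acts as a regularizer that prevents the learned weights from collapsing to zero.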
