Abstract
Meta-learning has proven to be a powerful paradigm for transferring knowledge from prior tasks to facilitate the quick learning of new tasks in automatic speech recognition. However, differences between languages (tasks) lead to variations in task learning directions, causing harmful competition for the model's limited resources. To address this challenge, we introduce task-agreement multilingual meta-learning (TAMML), which adopts the gradient agreement algorithm to guide the model parameters towards a direction in which tasks exhibit greater consistency. However, the computation and storage cost of TAMML grows dramatically as the model's depth increases. To address this, we further propose a simplification, TAMML-Light, which uses only the output layer for the gradient calculation. Experiments on three datasets demonstrate that TAMML and TAMML-Light outperform existing meta-learning approaches, yielding superior results. Furthermore, TAMML-Light reduces the relative increase in computation cost by at least 80% compared to TAMML.
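The gradient agreement step referred to above can be pictured, under our own simplifying assumptions, as weighting each language's task gradient by how well it aligns with the average gradient across languages, so the meta-update moves in a direction the tasks agree on. The PyTorch sketch below is illustrative only; the function names, the L1 normalization, and the update rule are our assumptions, not the paper's exact formulation.

```python
import torch


def gradient_agreement_weights(task_grads):
    """Weight each task's gradient by its agreement with the average gradient.

    `task_grads` is a list (one entry per task/language) of lists of
    per-parameter gradient tensors. Tasks whose gradients point in a
    direction consistent with the average receive larger weights.
    """
    # Flatten each task's gradients into a single vector.
    flat = [torch.cat([g.reshape(-1) for g in grads]) for grads in task_grads]
    g_avg = torch.stack(flat).mean(dim=0)
    # Agreement score: inner product with the average direction.
    scores = torch.stack([torch.dot(g, g_avg) for g in flat])
    # Normalize so the weights sum to one in absolute value (illustrative choice).
    return scores / (scores.abs().sum() + 1e-12)


def meta_update(params, task_grads, lr=1e-3):
    """Apply one meta-gradient step using agreement-weighted task gradients."""
    weights = gradient_agreement_weights(task_grads)
    with torch.no_grad():
        for i, p in enumerate(params):
            combined = sum(w * grads[i] for w, grads in zip(weights, task_grads))
            p -= lr * combined
```

Under the same assumptions, TAMML-Light would restrict `task_grads` to the output-layer parameters only, which is why its additional computation and storage overhead grows far more slowly with model depth.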