Abstract
Self-supervised models have demonstrated remarkable performance in speech processing by learning latent representations from large amounts of unlabeled data. Adapting these models to low-resource languages yields promising results, but the computational cost of fine-tuning all model parameters is prohibitively high. Adapters offer a solution by introducing lightweight bottleneck structures into pre-trained models, enabling parameter-efficient adaptation to downstream tasks. However, randomly initialized adapters often underperform in extremely low-resource scenarios. To address this issue, we explore the Meta-Adapter for self-supervised models and analyze its limitations, including poor learning of language-specific knowledge and meta-overfitting. To alleviate these problems, we propose the Meta-Adaptable-Adapter (MAA), a new meta-learning algorithm that adapts to low-resource languages quickly and effectively. MAA learns task-specific adapters for feature extraction and task-independent adapters for feature combination. Experiments on three datasets covering 31 low-resource languages across seven language families show that MAA outperforms other adapter-based methods, demonstrating better generalization and extensibility.
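To make the "lightweight bottleneck structure" concrete, below is a minimal NumPy sketch of a generic bottleneck adapter: a down-projection, a nonlinearity, an up-projection, and a residual connection. The dimensions, initialization, and function names are illustrative assumptions, not details taken from the paper's MAA architecture.

```python
import numpy as np

def adapter(x, W_down, W_up):
    # Bottleneck adapter: project d -> r, apply ReLU, project r -> d,
    # then add the residual so the frozen backbone's features pass through.
    h = np.maximum(x @ W_down, 0.0)
    return x + h @ W_up

rng = np.random.default_rng(0)
d, r = 768, 64  # hidden size and bottleneck size (illustrative values)
W_down = rng.standard_normal((d, r)) * 0.01
W_up = np.zeros((r, d))  # zero-init up-projection: adapter starts as a no-op

x = rng.standard_normal((4, d))  # a batch of 4 hidden vectors
y = adapter(x, W_down, W_up)
assert y.shape == x.shape
assert np.allclose(y, x)  # with W_up = 0, the adapter is the identity
```

Only `W_down` and `W_up` (roughly `2*d*r` parameters per layer) are trained, which is why adapter tuning is far cheaper than fine-tuning the full model.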