Abstract
Social media text can be classified along several dimensions, such as sentiment analysis, humour detection, hate speech detection, and hope speech detection. Multitask learning (MTL) models built on Large Language Models (LLMs) eliminate the need to build a separate model for each of these tasks. However, building MTL models by fully fine-tuning the LLM has limitations, such as catastrophic forgetting and the need for complete retraining whenever a new task is added. AdapterFusion was introduced to address these limitations. However, existing AdapterFusion techniques have not been evaluated on code-mixed or code-switched text, and they consider only task-based AdapterFusion on top of monolingual LLMs. Monolingual LLMs are sub-optimal for classifying code-mixed or code-switched text; multilingual LLMs are a better alternative. In this paper, we present an MTL model that combines task AdapterFusion with language adapters on top of a multilingual LLM. We combine language adapters sequentially, in parallel, and as a fusion with task adapters to capture cross-lingual knowledge in code-mixed and code-switched text. To the best of our knowledge, this is the first research to introduce language-based AdapterFusion.
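To make the adapter compositions mentioned above concrete, the sketch below stacks a language adapter on a fusion of task adapters using the AdapterHub `adapters` library. The checkpoint (mBERT), the adapter names, and the label counts are illustrative assumptions, not the configuration used in the paper.

```python
# A minimal sketch (not the paper's exact setup) of combining a language
# adapter with a fusion of task adapters on a multilingual LLM, using the
# AdapterHub `adapters` library. Checkpoint and adapter names are assumed.
import adapters.composition as ac
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-multilingual-cased")

# One bottleneck adapter per language present in the code-mixed text.
model.add_adapter("lang_en")
model.add_adapter("lang_ta")

# One adapter and classification head per social-media task.
for task in ["sentiment", "humour", "hate", "hope"]:
    model.add_adapter(task)
    model.add_classification_head(task, num_labels=2)

# AdapterFusion layer that learns to combine the four task adapters.
model.add_adapter_fusion(ac.Fuse("sentiment", "humour", "hate", "hope"))

# Sequential variant: route activations through a language adapter first,
# then through the fusion of task adapters (MAD-X-style stacking).
model.set_active_adapters(
    ac.Stack("lang_en", ac.Fuse("sentiment", "humour", "hate", "hope"))
)

# A parallel variant over language adapters could instead use
# ac.Parallel("lang_en", "lang_ta"), which forwards each input through
# both adapters on separate copies of the batch.
```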