Abstract

Multilingual pre-trained language models (mPLMs) have achieved remarkable performance on zero-shot cross-lingual transfer learning. However, most mPLMs encourage cross-lingual alignment only implicitly during pre-training, making it hard to capture accurate word alignment across languages. In this paper, we propose Word-align ADapters for Cross-lingual transfer (WAD-X) to explicitly align word representations of mPLMs in language-specific subspaces. Taking an mPLM as the backbone model, WAD-X constructs a subspace for each source-target language pair via adapters. The adapters use statistical alignment as prior knowledge to guide word-level alignment in the corresponding bilingual semantic subspace. We evaluate our model across a set of target languages on three zero-shot cross-lingual transfer tasks: part-of-speech tagging (POS), dependency parsing (DP), and sentiment analysis (SA). Experimental results demonstrate that our proposed model improves zero-shot cross-lingual transfer on all three benchmarks, with improvements of 2.19, 2.50, and 1.61 points on the POS, DP, and SA tasks over strong baselines.
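Although the abstract does not include code, the following minimal PyTorch sketch illustrates one way the described components could be realized: a per-language-pair bottleneck adapter over the backbone mPLM's hidden states, and a word-level alignment loss driven by statistically aligned word pairs. All class names, dimensions, and the specific loss form are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a bottleneck adapter per
# source-target language pair, plus a word-alignment loss that uses
# statistically aligned word pairs from an external aligner as prior
# knowledge. Names, sizes, and the loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BilingualAdapter(nn.Module):
    """Bottleneck adapter projecting mPLM states into a pair-specific subspace."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck transformation of the backbone representations.
        return hidden_states + self.up(F.gelu(self.down(hidden_states)))


def word_alignment_loss(src_states, tgt_states, aligned_pairs):
    """Pull together representations of statistically aligned word pairs.

    aligned_pairs: list of (src_index, tgt_index) tuples produced by a
    statistical word aligner on parallel text (assumed prior knowledge).
    """
    src_idx = torch.tensor([i for i, _ in aligned_pairs])
    tgt_idx = torch.tensor([j for _, j in aligned_pairs])
    src_vec = F.normalize(src_states[src_idx], dim=-1)
    tgt_vec = F.normalize(tgt_states[tgt_idx], dim=-1)
    # Higher cosine similarity between aligned words -> lower loss.
    return (1.0 - (src_vec * tgt_vec).sum(dim=-1)).mean()
```

In this reading, the adapter is trained (with the backbone frozen or lightly tuned) so that the alignment loss shapes the bilingual subspace, while the task head is trained on source-language data only, consistent with the zero-shot transfer setting described above.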

