Abstract

Pretrained multilingual text encoders based on neural transformer architectures, such as multilingual BERT (mBERT) and XLM, have recently become a default paradigm for cross-lingual transfer of natural language processing models, rendering cross-lingual word embedding spaces (CLWEs) effectively obsolete. In this work we present a systematic empirical study focused on the suitability of the state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR (a setup with no relevance judgments for IR-specific fine-tuning) pretrained multilingual encoders on average fail to significantly outperform earlier models based on CLWEs. For sentence-level retrieval, we do obtain state-of-the-art performance: the peak scores, however, are achieved by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than by their vanilla ‘off-the-shelf’ variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score a query against document sections. In the second part, we evaluate multilingual encoders fine-tuned in a supervised fashion (i.e., we learn to rank) on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that, despite the supervision, and due to the domain and language shift, supervised re-ranking rarely improves over the performance of multilingual transformers used as unsupervised base rankers. Finally, only with in-domain contrastive fine-tuning (i.e., same domain, only language transfer) do we manage to improve the ranking quality. We uncover substantial empirical differences between cross-lingual retrieval results and results of (zero-shot) cross-lingual transfer for monolingual retrieval in target languages, which point to “monolingual overfitting” of retrieval models trained on monolingual (English) data, even if they are based on multilingual transformers.
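
To make the localized relevance matching idea concrete, below is a minimal sketch of scoring a query independently against document sections and pooling the per-section scores. The encoder (LaBSE via the sentence-transformers library), the naive sentence-based sectioning, and the choice of max-pooling are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: localized relevance matching for unsupervised document-level CLIR.
# Assumptions (not the paper's exact setup): LaBSE as the multilingual
# sentence encoder, naive sentence-chunk sectioning, max-pooling over
# per-section cosine similarities.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/LaBSE")

def split_into_sections(document: str, sentences_per_section: int = 4) -> list[str]:
    """Naive sectioning: group consecutive sentences into fixed-size chunks."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return [". ".join(sentences[i:i + sentences_per_section])
            for i in range(0, len(sentences), sentences_per_section)]

def localized_score(query: str, document: str) -> float:
    """Encode the query and each section separately; max-pool cosine scores."""
    sections = split_into_sections(document)
    q_emb = encoder.encode(query, convert_to_tensor=True, normalize_embeddings=True)
    s_embs = encoder.encode(sections, convert_to_tensor=True, normalize_embeddings=True)
    return util.cos_sim(q_emb, s_embs).max().item()  # (1, n_sections) -> scalar

def rank(query: str, documents: list[str]) -> list[int]:
    """Return document indices sorted by descending localized relevance score."""
    scores = [localized_score(query, d) for d in documents]
    return sorted(range(len(documents)), key=lambda i: -scores[i])
```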

Highlights

  • Cross-lingual information retrieval (CLIR) systems respond to queries in a source language by retrieving relevant documents in another, target language

  • (2) We show that multilingual sentence encoders, fine-tuned on labeled data from sentence pair tasks like natural language inference or semantic textual similarity as well as on parallel sentences, substantially outperform general-purpose models in sentence-level CLIR (Sect. 4.3); further, they can be leveraged for localized relevance matching and in such a pooling setup improve the performance of unsupervised document-level CLIR (Sect. 4.4)

  • (5) We show that fine-tuning supervised CLIR models based on multilingual transformers on monolingual (English) data leads to a type of “overfitting” to monolingual retrieval (Sect. 5.3): we empirically show that language transfer in IR is more difficult in true cross-lingual settings, in which queries and documents are in different languages, than in monolingual IR in a different language


Introduction

Cross-lingual information retrieval (CLIR) systems respond to queries in a source language by retrieving relevant documents in another, target language. Their success is typically hindered by data scarcity: they operate in challenging low-resource settings without sufficient labeled training data, i.e., human relevance judgments, to build reliable in-domain supervised models (e.g., neural matching models for pairwise retrieval; Yu and Allan 2020; Jiang et al. 2020). Litschko et al. (2019) have shown that language transfer by means of cross-lingual word embedding spaces (CLWEs) can yield state-of-the-art performance in a range of unsupervised ad-hoc CLIR setups. This approach uses very weak cross-lingual (in this case, bilingual) supervision (i.e., only a bilingual dictionary spanning 1–5K word translation pairs), or even no bilingual supervision at all, to learn a mapping that aligns two monolingual word embedding spaces (Glavaš et al. 2019; Vulić et al. 2019). Contextual text representation models alleviate this issue (Liu et al. 2020) because they encode occurrences of the same word differently depending on its context.
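
The following is a minimal sketch of this projection-based CLWE approach: learning an orthogonal mapping from a small seed dictionary (orthogonal Procrustes) and using it to project source-language query embeddings into the target space for cosine-similarity retrieval. The function names and the assumption of pre-extracted, row-aligned dictionary embeddings are illustrative, not taken from any specific implementation in the cited work.

```python
# Sketch: aligning two monolingual word embedding spaces with a small
# bilingual dictionary via orthogonal Procrustes, in the spirit of the
# projection-based CLWE methods surveyed by Glavaš et al. (2019).
# Inputs (row-aligned dictionary embeddings) are illustrative assumptions.
import numpy as np

def learn_projection(src_dict_vecs: np.ndarray, tgt_dict_vecs: np.ndarray) -> np.ndarray:
    """Learn an orthogonal matrix W minimizing ||X W - Y||_F, where rows of
    X (source) and Y (target) are embeddings of word translation pairs."""
    u, _, vt = np.linalg.svd(src_dict_vecs.T @ tgt_dict_vecs)
    return u @ vt

def clir_scores(query_vec: np.ndarray, doc_vecs: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Cosine similarities between a projected source-language query vector
    and target-language document vectors (one row per document)."""
    q = query_vec @ W
    q = q / np.linalg.norm(q)
    docs = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return docs @ q
```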
