Abstract

Multilingual Transformer-based language models, usually pretrained on more than 100 languages, have been shown to achieve outstanding results in a wide range of cross-lingual transfer tasks. However, it remains unknown whether the optimization for different languages conditions the capacity of the models to generalize over syntactic structures, and how languages with syntactic phenomena of different complexity are affected. In this work, we explore the syntactic generalization capabilities of the monolingual and multilingual versions of BERT and RoBERTa. More specifically, we evaluate the syntactic generalization potential of the models on English and Spanish tests, comparing the syntactic abilities of monolingual and multilingual models on the same language (English), and of multilingual models on two different languages (English and Spanish). For English, we use the available SyntaxGym test suite; for Spanish, we introduce SyntaxGymES, a novel ensemble of targeted syntactic tests in Spanish, designed to evaluate the syntactic generalization capabilities of language models through the SyntaxGym online platform.
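
As a concrete illustration of this kind of targeted evaluation, the sketch below shows a minimal-pair check in the spirit of SyntaxGym: a masked language model is expected to assign a higher pseudo-log-likelihood to the grammatical member of a pair than to its ungrammatical counterpart. This is a minimal sketch, not the authors' evaluation code; the checkpoint name and the Spanish agreement pair are illustrative assumptions.

```python
# Minimal sketch of a SyntaxGym-style minimal-pair test with a masked LM.
# Checkpoint and sentences are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-multilingual-cased"  # assumed checkpoint; swap in BETO, XLM-R, etc.
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log-probabilities of each token when it is masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# Spanish subject-verb agreement minimal pair (illustrative).
good = "Las llaves del coche están sobre la mesa."
bad = "Las llaves del coche está sobre la mesa."
# Expected to print True for a model that captures Spanish number agreement.
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))
```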

Highlights

  • Transformer-based neural models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), DistilBERT (Sanh et al., 2019), XLNet (Yang et al., 2019), etc. are excellent learners.

  • For Spanish, the multilingual models clearly outperform the monolingual model. This is likely because, while BETO and mBERT are of comparable size and are trained on the same amount of data (16GB), BETO is trained only with a Masked Language Modeling (MLM) objective, whereas mBERT is trained on both MLM and Next Sentence Prediction (NSP); the sketch after this list illustrates the difference between the two objectives.

  • We have shown that multilingual models do not generalize equally well across languages: mBERT generalizes better for phenomena in English, while XLM-R generalizes better for phenomena in Spanish.
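
The following sketch illustrates the difference between the two pretraining objectives mentioned in the second highlight: a pure Masked Language Modeling loss (as used by BETO and RoBERTa) versus the joint MLM plus Next Sentence Prediction loss of BERT-style pretraining. It is an illustration under assumed toy inputs, not the training code of any of these models.

```python
# Contrast of MLM-only vs. MLM + NSP pretraining signals (illustrative only).
import torch
from transformers import BertTokenizer, BertForPreTraining

# Assumed checkpoint, used only to illustrate the two objectives.
name = "bert-base-multilingual-cased"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForPreTraining.from_pretrained(name)
model.eval()

# A sentence pair; sent_b is deliberately NOT the continuation of sent_a.
sent_a = "The keys to the car are on the table."
sent_b = "The weather was nice yesterday."
enc = tokenizer(sent_a, sent_b, return_tensors="pt")

# MLM part: hide one token and ask the model to recover it.
masked_pos = 4                                          # arbitrary position, for illustration
mlm_labels = torch.full_like(enc["input_ids"], -100)    # -100 = ignored by the MLM loss
mlm_labels[0, masked_pos] = enc["input_ids"][0, masked_pos]
enc["input_ids"][0, masked_pos] = tokenizer.mask_token_id

# NSP part: label 1 means "sent_b is a random sentence, not the true next one".
out = model(**enc, labels=mlm_labels, next_sentence_label=torch.tensor([1]))
print(out.loss)  # joint MLM + NSP loss; a pure-MLM model (e.g. BETO) drops the NSP term
```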



Introduction

Transformer-based neural models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), DistilBERT (Sanh et al., 2019), XLNet (Yang et al., 2019), etc. are excellent learners, and a number of works have assessed how well they capture syntactic structure (Linzen et al., 2016; Marvin and Linzen, 2018; Futrell et al., 2019; Wilcox et al., 2019a). Most of these works focus on monolingual models, and, where the coverage of syntactic phenomena is considered systematically and in detail, it is mainly for English, as, e.g., in Hu et al. (2020a). This paper aims to shift the attention from monolingual to multilingual models and to emphasize the importance of considering the syntactic phenomena of languages other than English when assessing the generalization potential of a model. It systematically assesses how well multilingual models are able to generalize over certain syntactic phenomena compared to monolingual models, and how well they can do so for English and for Spanish.
