Abstract

Transformer-based language models achieve high performance on various tasks, but we still lack a clear understanding of the kind of linguistic knowledge they learn and rely on. We evaluate three models (BERT, RoBERTa, and ALBERT), testing their grammatical and semantic knowledge through sentence-level probing, diagnostic cases, and masked prediction tasks. We focus on relative clauses (in American English) as a complex phenomenon that requires contextual information and antecedent identification to be resolved. Based on a naturalistic dataset, probing shows that all three models indeed capture linguistic knowledge about grammaticality, achieving high performance. Evaluation on diagnostic cases and masked prediction tasks that target fine-grained linguistic knowledge, however, reveals pronounced model-specific weaknesses, especially in semantic knowledge, strongly impacting the models’ performance. Our results highlight the importance of (a) model comparison in evaluation tasks and (b) grounding claims about model performance and the linguistic knowledge models capture in more than purely probing-based evaluations.
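
As an illustration of the sentence-level probing setup, the following sketch (our own, using the Hugging Face transformers library and scikit-learn; the toy sentences and labels are illustrative, not the paper's naturalistic dataset) trains a simple classifier on frozen BERT representations to predict grammaticality:

import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
model.eval()

def embed(sentences):
    # Mean-pool the last hidden layer as a frozen sentence representation.
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc).last_hidden_state          # (batch, seq, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)        # ignore padding tokens
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

# Toy grammaticality labels: 1 = grammatical, 0 = ungrammatical.
sentences = ["The woman who lives next door waved.",
             "The woman who live next door waved."]
labels = [1, 0]

probe = LogisticRegression(max_iter=1000).fit(embed(sentences), labels)
print(probe.predict(embed(["The man who owns the shop smiled."])))

The probing classifier itself stays deliberately simple, so that its accuracy reflects what is linearly recoverable from the frozen model representations rather than what the classifier can learn on its own.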

Highlights

  • Endeavors to better understand transformer-based masked language models (MLMs), such as BERT, have been growing steadily since the introduction of the transformer architecture in 2017 (cf. Rogers et al. (2020) for an overview)

  • We focus on relative clauses (RCs) in American English to further enhance our understanding of the grammatical and semantic knowledge captured by pre-trained MLMs, evaluating three models: BERT, RoBERTa, and ALBERT

  • We note that ALBERT has significantly fewer parameters than BERT and RoBERTa (12M vs. 110M and 125M), which might explain its lower performance


Summary

Introduction

Endeavors to better understand transformer-based masked language models (MLMs), such as BERT, have been growing steadily since the introduction of the transformer architecture in 2017 (cf. Rogers et al. (2020) for an overview). We focus on RCs in American English to further enhance our understanding of the grammatical and semantic knowledge captured by pre-trained MLMs, evaluating three models: BERT, RoBERTa, and ALBERT. We train probing classifiers, consider each model’s performance on diagnostic cases, and test predictions in a masked language modeling task on selected semantic and grammatical constraints of RCs. BERT (Devlin et al., 2019) is a transformer-based (Vaswani et al., 2017) bidirectional network trained on masked language modeling and next-sentence prediction. The extent to which BERT captures linguistic knowledge has been widely studied in previous work (see §2.2). We consider the base variants BERT-base-cased, RoBERTa-base, and ALBERT-base-v1, with 110M, 125M, and 12M parameters, respectively.
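
The masked prediction setup can be illustrated with a minimal sketch using the Hugging Face transformers library and the three base checkpoints named above (bert-base-cased, roberta-base, albert-base-v1); the relative-clause sentence below is our own illustrative example, not an item from the paper's dataset:

from transformers import pipeline

checkpoints = ["bert-base-cased", "roberta-base", "albert-base-v1"]

for name in checkpoints:
    fill = pipeline("fill-mask", model=name)
    # Each tokenizer defines its own mask token ([MASK] vs. <mask>),
    # so we insert it programmatically rather than hard-coding it.
    sentence = f"The author {fill.tokenizer.mask_token} wrote this book lives in Boston."
    for pred in fill(sentence, top_k=3):
        print(name, pred["token_str"], round(pred["score"], 3))

Comparing the top-ranked fillers across models (for instance, whether a model prefers the relativizer "who" over "which" for an animate antecedent) gives a direct, model-by-model view of the kind of grammatical and semantic constraint the masked prediction task evaluates.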
