Abstract

Textual representation learners trained on large amounts of data have achieved notable success on downstream tasks; intriguingly, they have also performed well on challenging tests of syntactic competence. Hence, it remains an open question whether scalable learners like BERT can become fully proficient in the syntax of natural language by virtue of data scale alone, or whether they still benefit from more explicit syntactic biases. To answer this question, we introduce a knowledge distillation strategy for injecting syntactic biases into BERT pretraining, by distilling the syntactically informative predictions of a hierarchical—albeit harder to scale—syntactic language model. Since BERT models masked words in bidirectional context, we propose to distill the approximate marginal distribution over words in context from the syntactic LM. Our approach reduces relative error by 2–21% on a diverse set of structured prediction tasks, although we obtain mixed results on the GLUE benchmark. Our findings demonstrate the benefits of syntactic biases, even for representation learners that exploit large amounts of data, and contribute to a better understanding of where syntactic biases are helpful in benchmarks of natural language understanding.

Highlights

  • Large-scale textual representation learners trained with variants of the language modeling (LM) objective have achieved remarkable success on downstream tasks (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019)

  • We observe a different pattern of results on the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2018) than on the rest of the GLUE benchmark

  • Our findings indicate a partial dissociation between model performance on the structured prediction tasks and on GLUE; supplementing GLUE evaluation with some of these structured prediction tasks can offer a more holistic assessment of progress in natural language understanding (NLU)

Introduction

Large-scale textual representation learners trained with variants of the language modeling (LM) objective have achieved remarkable success on downstream tasks (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019). These models have also been shown to perform remarkably well at syntactic grammaticality judgment tasks (Goldberg, 2019) and to encode substantial amounts of syntax in their learned representations (Liu et al., 2019a; Tenney et al., 2019a,b; Hewitt and Manning, 2019; Jawahar et al., 2019). Success on these syntactic tasks has been achieved by Transformer architectures (Vaswani et al., 2017) that lack explicit notions of hierarchical syntactic structure, which raises the question of whether such learners can become fully proficient in the syntax of natural language through data scale alone, or whether they still benefit from more explicit syntactic biases. We work towards answering this question by devising a new pretraining strategy that injects syntactic biases into a BERT (Devlin et al., 2019) learner that works well at scale. We hypothesize that this approach can improve the competence of BERT on various tasks, which would provide evidence for the benefits of syntactic biases in large-scale models.
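
As a concrete illustration of the distillation strategy described in the abstract, the sketch below shows one plausible form of the pretraining objective: at each masked position, BERT's prediction is trained against an interpolation of the observed (hard) target and the syntactic LM's approximate marginal distribution over words in context (soft target). This is a minimal sketch in PyTorch, not the authors' implementation; the function name, the tensor shapes, and the interpolation weight alpha are assumptions for illustration, and the teacher distribution is assumed to be precomputed.

    import torch
    import torch.nn.functional as F

    def structure_distillation_loss(student_logits, gold_ids, teacher_probs,
                                    mask, alpha=0.5):
        """Hypothetical sketch of a distillation objective at masked positions.

        student_logits: (batch, seq_len, vocab) raw BERT output scores
        gold_ids:       (batch, seq_len)        original token ids
        teacher_probs:  (batch, seq_len, vocab) approximate marginal distribution
                        over words in context, precomputed from the syntactic LM
        mask:           (batch, seq_len)        boolean, True at masked positions
        alpha:          assumed interpolation weight between hard and soft targets
        """
        log_probs = F.log_softmax(student_logits, dim=-1)

        # Standard masked-LM cross-entropy against the observed (hard) targets.
        hard_loss = F.nll_loss(log_probs[mask], gold_ids[mask])

        # Distillation term: cross-entropy against the teacher's soft distribution,
        # i.e. -sum_w q(w | context) * log p_student(w | context) at masked positions.
        soft_loss = -(teacher_probs[mask] * log_probs[mask]).sum(dim=-1).mean()

        # Interpolate the two terms; alpha = 1 recovers ordinary BERT pretraining.
        return alpha * hard_loss + (1.0 - alpha) * soft_loss

In this sketch, the soft term is what injects the syntactic bias: the teacher's marginal distribution allocates probability mass according to the syntactic LM, so minimizing the cross-entropy pulls BERT's masked-word predictions towards syntactically informed choices.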
