Abstract

Neural dependency parsing has proven very effective, achieving state-of-the-art results on numerous domains and languages. Unfortunately, it requires large amounts of labeled data, which is costly and laborious to create. In this paper we propose a self-training algorithm that alleviates this annotation bottleneck by training a parser on its own output. Our Deep Contextualized Self-training (DCST) algorithm utilizes representation models trained on sequence labeling tasks that are derived from the parser’s output when applied to unlabeled data, and integrates these models with the base parser through a gating mechanism. We conduct experiments across multiple languages, both in low-resource in-domain and in cross-domain setups, and demonstrate that DCST substantially outperforms traditional self-training as well as recent semi-supervised training methods.
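
To make the description above more concrete, the following minimal Python (PyTorch) sketch illustrates two ingredients of such a pipeline: deriving a sequence labeling task from the parser's predicted trees, and mixing base-parser representations with representations from a tagger trained on that task via a sigmoid gate. The function and class names, the choice of auxiliary task (tagging each word with its number of children in the predicted tree), and the exact gate formulation are illustrative assumptions, not the paper's reference implementation.

```python
# Hypothetical sketch of two DCST ingredients (names, dimensions, and the
# exact gate formulation are illustrative assumptions):
#  (1) deriving a sequence-labeling task from the parser's predicted trees,
#  (2) an element-wise sigmoid gate mixing base and auxiliary word states.

from typing import List
import torch
import torch.nn as nn


def children_count_labels(pred_heads: List[int]) -> List[int]:
    """Turn a predicted dependency tree (head index per word, 0 = root) into a
    sequence-labeling target: the number of children of each word. This is one
    example of an auxiliary task derived from automatically parsed unlabeled data."""
    counts = [0] * len(pred_heads)
    for head in pred_heads:
        if head > 0:               # skip attachments to the artificial root
            counts[head - 1] += 1  # heads are 1-based, counts are 0-based
    return counts


class GatedCombination(nn.Module):
    """Combine base-parser word states with representations from a tagger that
    was trained on the auxiliary task, via a learned element-wise gate."""

    def __init__(self, base_dim: int, aux_dim: int):
        super().__init__()
        self.project_aux = nn.Linear(aux_dim, base_dim)  # align dimensions
        self.gate = nn.Linear(2 * base_dim, base_dim)    # per-dimension gate

    def forward(self, h_base: torch.Tensor, h_aux: torch.Tensor) -> torch.Tensor:
        h_aux = self.project_aux(h_aux)
        g = torch.sigmoid(self.gate(torch.cat([h_base, h_aux], dim=-1)))
        return g * h_base + (1.0 - g) * h_aux            # gated mixture


if __name__ == "__main__":
    # "The dog chased the cat": heads predicted by the base parser (1-based).
    print(children_count_labels([2, 3, 0, 5, 3]))  # -> [0, 1, 2, 0, 1]

    combine = GatedCombination(base_dim=8, aux_dim=6)
    h_base = torch.randn(1, 5, 8)  # (batch, words, base hidden size)
    h_aux = torch.randn(1, 5, 6)   # (batch, words, auxiliary hidden size)
    print(combine(h_base, h_aux).shape)  # torch.Size([1, 5, 8])
```

In the setup described in the abstract, representations like h_aux would come from a sequence tagger trained on labels such as those produced by children_count_labels over parsed unlabeled data, while h_base would come from the base parser's encoder; the gated mixture then feeds the parser's scoring layers.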

Highlights

  • Deep neural networks (DNNs) have improved the state-of-the-art in a variety of NLP tasks

  • In extensive lightly supervised and cross-domain experiments with a neural dependency parser, we show that our Deep Contextualized Self-training (DCST) algorithm outperforms traditional self-training and the cross-view training (CVT) approach

  • Che et al. (2018), who trained their ELMo model on the unlabeled data of the CoNLL 2018 shared task, reported that "The training of ELMo on one language takes roughly 3 days on an NVIDIA P100 GPU." We demonstrate the power of our models when ELMo embeddings are available (§8), in order to establish the added impact of deep contextualized self-training on top of contextualized word embeddings

Summary

Introduction

Deep neural networks (DNNs) have improved the state-of-the-art in a variety of NLP tasks. These include dependency parsing (Dozat and Manning, 2017), semantic parsing (Hershcovich et al., 2017), named entity recognition (Yadav and Bethard, 2018), part of speech (POS) tagging (Plank and Agić, 2018), and machine translation (Vaswani et al., 2017), among others. DNNs rely on in-domain labeled training data, which is costly and laborious to obtain. This annotation bottleneck limits the applicability of NLP technology to a small number of languages and domains.
