Abstract

Transition-based approaches to dependency parsing based on local classification are attractive due to their simplicity and speed, despite producing results slightly below the state of the art. In this paper, we propose a new approach to approximate structured inference for transition-based parsing that produces scores suitable for global scoring using local models. This is accomplished by introducing error states in local training, which add information about incorrect derivation paths that is typically left out of locally-trained models entirely. Using neural networks as our local classifiers, our approach achieves 93.61% accuracy for transition-based dependency parsing in English.

Highlights

  • Transition-based parsing approaches based on local classification of parser actions (Nivre, 2008) remain attractive due to their simplicity, despite producing results slightly below the state of the art

  • We present a new approach for approximate structured inference for transition-based parsing that allows us to obtain high parsing accuracy using neural networks

  • We improve search by producing scores suitable for global scoring using only local models, and show that our approach is competitive with the structured perceptron in transition-based parsing


Summary

Introduction

Transition-based parsing approaches based on local classification of parser actions (Nivre, 2008) remain attractive due to their simplicity, despite producing results slightly below the state of the art. We propose a novel approach to approximate structured inference for transition-based parsing that uses locally-trained neural networks which, unlike previous local classification approaches, produce scores suitable for global scoring. This is accomplished by introducing error states in local training, which add information about incorrect derivation paths that locally-trained models typically leave out entirely. Our approach produces high accuracy for transition-based dependency parsing in English, surpassing parsers based on the structured perceptron (Huang and Sagae, 2010; Zhang and Nivre, 2011), by allowing seamless integration of pre-trained word embeddings while requiring almost none of the feature engineering typically associated with parsing with linear models. Our experiments show that naive search produces very limited improvements in accuracy compared to greedy inference, while search in conjunction with error states that mark incorrect derivations produces substantial accuracy improvements.
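To make the idea of error states concrete, the following is a minimal, self-contained sketch of how training examples with an error class could be generated for an arc-standard parser. It follows the gold derivation and, at each configuration, emits one example for the oracle action plus one example labeled `"ERR"` for each configuration reached by any other legal action. All names here (the `"ERR"` label, the toy feature function, the action names) are illustrative assumptions, not the authors' implementation, which trains neural classifiers over much richer features.

```python
# Toy arc-standard transition system over word indices 1..n, with 0 as root.
# heads maps each word index to its gold head index.

def legal_actions(stack, buf):
    acts = []
    if buf:
        acts.append("SHIFT")
    if len(stack) >= 2:
        if stack[-2] != 0:          # the root may not become a dependent
            acts.append("LEFT")
        acts.append("RIGHT")
    return acts

def apply_action(stack, buf, arcs, act):
    stack, buf, arcs = list(stack), list(buf), set(arcs)
    if act == "SHIFT":
        stack.append(buf.pop(0))
    elif act == "LEFT":             # stack[-2] becomes dependent of stack[-1]
        dep = stack.pop(-2)
        arcs.add((stack[-1], dep))
    elif act == "RIGHT":            # stack[-1] becomes dependent of stack[-2]
        dep = stack.pop()
        arcs.add((stack[-1], dep))
    return stack, buf, arcs

def oracle(stack, buf, arcs, heads):
    # Standard static arc-standard oracle for a gold projective tree.
    if len(stack) >= 2:
        s1, s0 = stack[-2], stack[-1]
        if heads.get(s1) == s0:
            return "LEFT"
        if heads.get(s0) == s1 and all(
                (s0, d) in arcs for d, h in heads.items() if h == s0):
            return "RIGHT"
    return "SHIFT"

def features(stack, buf):
    # Toy feature: top two stack items and the first buffer item.
    return (tuple(stack[-2:]), tuple(buf[:1]))

def error_state_examples(n_words, heads):
    """Return (features, label) pairs: the oracle action at each gold state,
    plus an "ERR" example for each state reached by a wrong legal action."""
    examples = []
    stack, buf, arcs = [0], list(range(1, n_words + 1)), set()
    while buf or len(stack) > 1:
        gold = oracle(stack, buf, arcs, heads)
        examples.append((features(stack, buf), gold))
        for act in legal_actions(stack, buf):
            if act != gold:
                s2, b2, _ = apply_action(stack, buf, arcs, act)
                examples.append((features(s2, b2), "ERR"))  # off-path state
        stack, buf, arcs = apply_action(stack, buf, arcs, gold)
    return examples, arcs
```

For the three-word tree {1→2, 2→0, 3→2} (e.g. "She slept well"), the gold derivation is SHIFT, SHIFT, LEFT, SHIFT, RIGHT, RIGHT, and the procedure additionally emits "ERR" examples for every off-path configuration; a classifier trained with this extra class can assign low scores to states on incorrect derivations, which is what makes its local scores usable for global scoring during search.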

Background
Arc-Standard Dependency Parsing
Local Classification
Structured Perceptron
Parsing with Local Classifiers and Error States
Training Local Classifiers with Error States
Parsing with Error States
Neural Models for Transition-Based Parsing
Semi-supervised Learning
Vanilla Arc-standard Parsers
Error State Parsers
Model and Parser Selection
Results
Related Work
Conclusion