Abstract

Most unsupervised dependency parsers are based on probabilistic generative models that learn the joint distribution of a sentence and its parse. Such models typically decompose the dependency tree explicitly into factorized grammar rules, which lack global features of the entire sentence. In this paper, we propose a novel probabilistic model, the discriminative neural dependency model with valence (D-NDMV), which generates a sentence and its parse from a continuous latent representation that encodes global contextual information about the generated sentence. We propose two approaches to modeling the latent representation: the first deterministically summarizes the representation from the sentence, while the second probabilistically models the representation conditioned on the sentence. Our approach can be regarded as a new type of autoencoder for unsupervised dependency parsing that combines the benefits of both generative and discriminative techniques. In particular, it breaks the context-free independence assumption made by previous generative approaches and is therefore more expressive. Extensive experiments on seventeen datasets from various sources show that our approach achieves competitive accuracy compared with both generative and discriminative state-of-the-art unsupervised dependency parsers.
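
To make the two latent-representation variants concrete, here is a minimal sketch in PyTorch. All module names, dimensions, and the mean-pooling/Gaussian-posterior choices are illustrative assumptions for exposition, not the authors' implementation; the point is only the contrast between a deterministic summary of the sentence and a probabilistic model of the representation conditioned on the sentence.

```python
# Illustrative sketch (not the paper's code): two ways to obtain the
# continuous latent representation z that conditions the grammar-rule
# probabilities. Names and dimensions are assumptions.
import torch
import torch.nn as nn

class DeterministicEncoder(nn.Module):
    """Variant 1: deterministically summarize the sentence into z."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))   # (batch, seq_len, hid_dim)
        return h.mean(dim=1)                   # z: mean-pooled summary

class VariationalEncoder(nn.Module):
    """Variant 2: model z probabilistically, q(z | sentence) = N(mu, sigma^2)."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128, z_dim=64):
        super().__init__()
        self.base = DeterministicEncoder(vocab_size, emb_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, z_dim)
        self.logvar = nn.Linear(hid_dim, z_dim)

    def forward(self, tokens):
        h = self.base(tokens)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar
```

In both variants, z is then fed to the decoder that produces the grammar-rule probabilities, which is how global sentence context enters what would otherwise be purely local rule decisions.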

Highlights

  • Dependency parsing is a fundamental task in natural language processing

  • We first compare our model with two generative models: NDMV and the left-corner Dependency Model with Valence (LC-DMV) (Noji et al., 2016)

  • We use the deterministic variant of D-NDMV to conduct the following analysis; it performs similarly to the probabilistic variant of D-NDMV


Summary

Introduction

Dependency parsing is a fundamental task in natural language processing. The dependency relations identified by dependency parsing convey syntactic information useful in downstream applications such as semantic parsing, information extraction, and question answering. Most previous approaches to unsupervised dependency parsing are based on probabilistic generative models, for example, the Dependency Model with Valence (DMV) (Klein and Manning, 2004) and its extensions (Cohen and Smith, 2009; Headden III et al., 2009; Cohen and Smith, 2010; Berg-Kirkpatrick et al., 2010; Gillenwater et al., 2010; Jiang et al., 2016). A disadvantage of such approaches comes from the context-freeness of dependency grammars, a strong independence assumption that limits the information available for determining how likely a dependency is between two words in a sentence. Additional information used for computing dependency probabilities in later work is limited to local morpho-syntactic features such as word forms, lemmas, and categories (Berg-Kirkpatrick et al., 2010), which does not break the context-free assumption.
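
To see why the context-free assumption is limiting, consider how a DMV-style model scores a tree: the joint probability factorizes into per-dependency rule probabilities, each conditioned only on the head's tag, the attachment direction, and the valence. The sketch below (a simplified illustration under assumed dictionary-based parameters, not the paper's code, and with STOP decisions only noted in a comment) makes this locality explicit: no factor can see the rest of the sentence.

```python
# Minimal sketch of DMV-style factorized scoring. Each dependency's
# probability depends only on (head tag, direction, child tag) -- never
# on the surrounding context. This locality is the context-free
# independence assumption discussed above.
from math import log

def dmv_log_prob(tags, heads, p_choose):
    """tags: POS tag per token; heads[i]: head index of token i (-1 = root).
    p_choose[(head_tag, direction, child_tag)] holds the factorized
    attachment probabilities (here a plain dict, assumed given)."""
    lp = 0.0
    for i, h in enumerate(heads):
        head_tag = 'ROOT' if h == -1 else tags[h]
        # Root arcs use 'R' by convention in this sketch.
        direction = 'L' if (h != -1 and i < h) else 'R'
        lp += log(p_choose[(head_tag, direction, tags[i])])
    # A full DMV also multiplies in STOP/CONTINUE probabilities per
    # head and direction; those too condition only on local valence.
    return lp
```

D-NDMV relaxes exactly this restriction by additionally conditioning the rule probabilities on a latent representation of the whole sentence.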

