Abstract

Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted, labeled, directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations, and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to a pipeline that first aligns and then parses. The parser achieves the best reported results on the standard benchmark (74.4% on LDC2016E25).
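To make the idea of a continuous relaxation of discrete alignments concrete, the sketch below shows a generic Gumbel-softmax relaxation: a soft, differentiable stand-in for sampling which sentence word a concept aligns to. This is a minimal illustration of the relaxation idea, not the paper's exact construction, and the alignment scores used are hypothetical.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable relaxation of a one-hot sample from softmax(logits).

    As tau -> 0 the output approaches a discrete one-hot choice;
    larger tau yields a smoother distribution over positions.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()

# Hypothetical alignment scores of one AMR concept against a
# 4-word sentence; the result is a soft "choice" of one word.
scores = np.array([2.0, 0.1, -1.0, 0.5])
relaxed_alignment = gumbel_softmax(scores, tau=0.2)
print(relaxed_alignment)  # non-negative weights summing to 1
```

Because the relaxed alignment is a deterministic, differentiable function of the scores and the noise, gradients can flow through it during training, which is what makes the latent-alignment objective tractable to optimize.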

Highlights

  • Abstract meaning representations (AMRs) (Banarescu et al., 2013) are broad-coverage sentence-level semantic representations

  • As AMR abstracts away from details of surface realization, it is potentially beneficial in many semantics-related NLP tasks, including text summarization (Liu et al., 2015; Dohare and Karnick, 2017), machine translation (Jones et al., 2012) and question answering (Mitra and Baral, 2016)

  • AMR banks are far smaller than the parallel corpora used in machine translation (MT), so it is important to inject a useful inductive bias


Summary

Introduction

Abstract meaning representations (AMRs) (Banarescu et al., 2013) are broad-coverage sentence-level semantic representations. As AMR abstracts away from details of surface realization, it is potentially beneficial in many semantics-related NLP tasks, including text summarization (Liu et al., 2015; Dohare and Karnick, 2017), machine translation (Jones et al., 2012) and question answering (Mitra and Baral, 2016). One distinctive aspect of AMR annotation is the lack of explicit alignments between nodes in the graph (concepts) and words in the sentences. Though this arguably simplified the annotation process (Banarescu et al., 2013), it is not straightforward to produce an effective parser without relying on an aligner. Such aligners are not directly informed by the AMR parsing objective and may produce alignments that are suboptimal for this task.

