Abstract

Abstract Meaning Representation (AMR) parsing has seen notable performance growth in the last two years, due both to the impact of transfer learning and to the development of novel architectures specific to AMR. At the same time, self-learning techniques have helped push the performance boundaries of other natural language processing applications, such as machine translation and question answering. In this paper, we explore different ways in which trained models can be applied to improve AMR parsing performance, including generation of synthetic text and synthetic AMR annotations, as well as refinement of the actions oracle. We show that, without any additional human annotations, these techniques improve an already performant parser and achieve state-of-the-art results on AMR 1.0 and AMR 2.0.

Highlights

  • Abstract Meaning Representation (AMR) is a broad-coverage sentence-level semantic representation expressing who does what to whom

  • It is important to underline that the presented methods use no additional human annotations throughout the experiments; the only external source of data is additional unlabeled text, used to produce synthetic Abstract Meaning Representation (AMR) annotations, which we indicate with U

  • synAMR provides the largest gain (0.7/0.8 Smatch) on AMR1.0/AMR2.0, while synTxt provides close to half that gain (0.4/0.3). Combining both methods improves over their individual scores only for AMR1.0, with a 0.9 gain (see the sketch after this list)
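
For concreteness, here is a minimal sketch of the two synthetic-data methods compared above, assuming a simple (sentence, graph) pair representation. All interfaces here (train_parser, train_generator, .parse, .generate) are hypothetical stand-ins for illustration, not the authors' actual code; pairing the generated sentences with the gold graphs in synTxt is likewise an assumption of this sketch.

```python
from typing import List, Tuple

Pair = Tuple[str, str]  # (sentence, AMR graph in PENMAN notation)

def train_parser(pairs: List[Pair]):
    # Hypothetical stand-in for training the text-to-AMR parser.
    raise NotImplementedError

def train_generator(pairs: List[Pair]):
    # Hypothetical stand-in for training an AMR-to-text generator.
    raise NotImplementedError

def syn_amr(gold: List[Pair], unlabeled_text: List[str]) -> List[Pair]:
    """synAMR: parse the additional unlabeled text U with a parser
    trained on gold data, yielding synthetic AMR annotations."""
    parser = train_parser(gold)
    return [(sent, parser.parse(sent)) for sent in unlabeled_text]

def syn_txt(gold: List[Pair]) -> List[Pair]:
    """synTxt: generate alternative sentences for the gold graphs with
    a trained AMR-to-text generator, keeping the gold graphs as targets."""
    generator = train_generator(gold)
    return [(generator.generate(graph), graph) for _, graph in gold]

# Either synthetic set (or their union) is concatenated with the gold
# pairs and the parser is retrained; no human annotation is added.
```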

Summary

Introduction

Abstract Meaning Representation (AMR) is a broad-coverage sentence-level semantic representation expressing who does what to whom. Nodes in an AMR graph correspond to concepts such as entities or predicates and are not always directly related to words. Recent performance gains come from models such as that of Zhang et al. (2019a) and, more recently, a highly performant graph-sequence iterative refinement model (Cai and Lam, 2020) and a hard-attention transition-based parser. We explore the use of a trained parser to iteratively refine a rule-based AMR oracle, and we revisit silver data training (Konstas et al., 2017a). These techniques reach 77.3 and 80.7 Smatch (Cai and Knight, 2013) on AMR1.0 and AMR2.0 respectively using only gold data, as well as 78.2 and 81.3 with silver data.
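
For reference, Smatch (Cai and Knight, 2013) scores a parse as the F1 over relation triples matched between the predicted and gold graphs under the best variable alignment. Below is a minimal sketch of the final score computation, assuming the triple-match counts have already been produced by the (omitted) alignment search:

```python
def smatch_f1(matched: int, predicted_total: int, gold_total: int) -> float:
    """F1 over matched AMR triples; the counts come from the variable
    alignment search of the reference smatch implementation (omitted)."""
    precision = matched / predicted_total if predicted_total else 0.0
    recall = matched / gold_total if gold_total else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 80 matched triples out of 100 predicted and 100 gold triples
print(smatch_f1(80, 100, 100))  # 0.8
```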

Baseline Parser and Setup
Oracle Self-Training
Self-Training with Synthetic AMR
Comparison Background
Results
Related Works
Conclusions