Abstract

We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs). TRANX uses a transition system based on the abstract syntax description language for the target MR, which gives it two major advantages: (1) it is highly accurate, using information from the syntax of the target MR to constrain the output space and model the information flow, and (2) it is highly generalizable, and can easily be applied to new types of MR by just writing a new abstract syntax description corresponding to the allowable structures in the MR. Experiments on four different semantic parsing and code generation tasks show that our system is generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.
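The transition system mentioned in the abstract constructs an AST action by action under the guidance of the ASDL grammar for the target MR; the paper describes three generic action types, ApplyConstr[c], Reduce, and GenToken[v]. The short Python sketch below is a hypothetical illustration of how such actions might be represented and sequenced, not the released TRANX implementation; the data structures and the example derivation are made up for exposition.

    # Hypothetical sketch: the three generic action types described in the
    # paper. Class names mirror the paper's terminology; everything else is
    # illustrative only.
    from dataclasses import dataclass

    @dataclass
    class ApplyConstr:
        """Expand the frontier field of the partial AST with an ASDL constructor."""
        constructor: str

    @dataclass
    class Reduce:
        """Close the frontier field (end of an optional or sequential field)."""

    @dataclass
    class GenToken:
        """Fill a primitive-typed frontier field with a token (e.g. an identifier)."""
        token: str

    # Illustrative derivation of the one-statement program `foo(x)`:
    actions = [
        ApplyConstr("Expr"),   # stmt -> Expr(expr value)
        ApplyConstr("Call"),   # expr -> Call(expr func, expr* args, keyword* keywords)
        ApplyConstr("Name"),   # the callee is a Name
        GenToken("foo"),       # identifier of the Name
        ApplyConstr("Name"),   # first element of the sequential field `args`
        GenToken("x"),
        Reduce(),              # close `args`
        Reduce(),              # close `keywords`
    ]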

Highlights

  • Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs)

  • An earlier version of this system is used in Yin et al (2018). Because of these varying formalisms for MRs, the design of semantic parsers, especially neural network-based ones, has generally focused on a small subset of tasks: in order to ensure the syntactic well-formedness of generated MRs, a parser is usually designed to reflect the domain-dependent grammar of MRs in the structure of the model (Zhong et al, 2017; Xu et al, 2017)

  • Rabinovich et al (2017) propose abstract syntax networks (ASNs), where domain-specific MRs are represented by abstract syntax trees (ASTs, Fig. 2 Left) specified under the abstract syntax description language (ASDL) framework (Wang et al, 1997); a sketch of this representation follows this list
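To make the AST-under-ASDL representation in the last highlight concrete, here is a minimal, hypothetical sketch of how typed ASDL constructors and AST nodes could be modeled in Python; the class names (FieldSpec, Constructor, ASTNode) are illustrative and are not taken from the ASN or TRANX codebases.

    # Hypothetical sketch of ASTs specified under an ASDL grammar.
    # An ASDL constructor has a name and a list of typed fields; an AST node
    # instantiates a constructor and assigns a value to each field.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FieldSpec:
        type: str          # ASDL type, e.g. "expr" or "identifier"
        name: str          # field name, e.g. "func"
        cardinality: str   # "single", "optional" ("?"), or "sequential" ("*")

    @dataclass
    class Constructor:
        name: str                 # e.g. "Call"
        fields: List[FieldSpec]

    @dataclass
    class ASTNode:
        constructor: Constructor
        children: dict            # field name -> list of child ASTNodes or tokens

    # The Call constructor from the grammar excerpt in the Introduction below:
    CALL = Constructor("Call", [
        FieldSpec("expr", "func", "single"),
        FieldSpec("expr", "args", "sequential"),
        FieldSpec("keyword", "keywords", "sequential"),
    ])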


Summary

Introduction

Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs). For more task-driven approaches to semantic parsing, it is common for meaning representations to represent executable programs such as SQL queries (Zhong et al, 2017), robotic commands (Artzi and Zettlemoyer, 2013), smart phone instructions (Quirk et al, 2015), and even general-purpose programming languages like Python (Yin and Neubig, 2017; Rabinovich et al, 2017) and Java (Ling et al, 2016). Because of these varying formalisms for MRs, the design of semantic parsers, especially neural network-based ones, has generally focused on a small subset of tasks: in order to ensure the syntactic well-formedness of generated MRs, a parser is usually designed to reflect the domain-dependent grammar of MRs in the structure of the model (Zhong et al, 2017; Xu et al, 2017).

ASDL grammar (excerpt):
    stmt ↦ Expr(expr value)
    expr ↦ Call(expr func, expr* args, keyword* keywords)
         | Attribute(expr value, identifier attr)
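Python's own abstract grammar is specified in ASDL, so the productions in the excerpt above can be checked against the standard library's ast module. The snippet below is a quick illustration (standard library only, not part of TRANX): it parses a small expression and prints its AST, whose nodes are exactly the Expr, Call, and Attribute constructors listed in the excerpt.

    # The ASDL constructors above are the same ones Python's `ast` module
    # uses for its parse trees.
    import ast

    tree = ast.parse("obj.method(arg)")
    print(ast.dump(tree.body[0]))
    # Prints (wrapped here for readability; details vary slightly by Python version):
    # Expr(value=Call(func=Attribute(value=Name(id='obj', ctx=Load()),
    #                                attr='method', ctx=Load()),
    #                 args=[Name(id='arg', ctx=Load())], keywords=[]))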

Methodology
Modeling ASTs using ASDL Grammar
Transition System
Experiments