Abstract

Semantic parsing has emerged as a powerful paradigm for natural language interfaces and question answering systems. Traditional methods for building a semantic parser rely on high-quality lexicons, hand-crafted grammars, and linguistic features, which limit them to a particular domain or meaning representation. In this paper, we propose an approach to learning from denotations based on a Seq2Seq model augmented with an attention mechanism. We encode the input sequence into vectors and use dynamic programming to infer candidate logical forms, exploiting the fact that similar utterances should have similar logical forms to reduce the search space. Through the learning mechanism of the Seq2Seq model, the mapping can be learned gradually despite noisy supervision, and curriculum learning is adopted to make training smoother. We test our model on a small arithmetic domain and show that it can infer correct logical forms and learn a meaningful semantic parser.
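The dynamic-programming step described above can be illustrated with a minimal sketch (not the paper's implementation): a CKY-style chart over the numbers in an utterance enumerates binary arithmetic logical forms span by span, and candidates are kept only if their denotation matches the observed answer. The function name `candidate_forms` and the tiny operator set are assumptions for illustration.

```python
# Illustrative sketch of learning-from-denotations candidate generation
# for a small arithmetic domain (not the paper's actual code).
import itertools

# Hypothetical operator inventory for the toy domain.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def candidate_forms(numbers, target):
    """Enumerate logical forms over `numbers` (kept in order) whose
    denotation equals `target`. chart[(i, j)] holds (form, value)
    pairs derivable from numbers[i:j], filled bottom-up by span length."""
    n = len(numbers)
    chart = {(i, i + 1): [(str(numbers[i]), numbers[i])] for i in range(n)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            cell = []
            # Combine every split point with every operator.
            for k in range(i + 1, j):
                for (lf, lv), (rf, rv) in itertools.product(
                        chart[(i, k)], chart[(k, j)]):
                    for op, fn in OPS.items():
                        cell.append((f"({lf} {op} {rf})", fn(lv, rv)))
            chart[(i, j)] = cell
    # Keep only forms whose denotation matches the observed answer.
    return [lf for lf, v in chart[(0, n)] if v == target]

print(candidate_forms([2, 3, 4], 14))
```

On `[2, 3, 4]` with denotation 14, both `(2 + (3 * 4))` and `(2 * (3 + 4))` survive the filter, which shows why denotation supervision is noisy: spurious logical forms with the right answer remain in the candidate set, motivating the similarity-based pruning and curriculum learning described in the abstract.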
