Abstract

Semantic understanding, which captures the meaning behind the surface form of language, is an essential research issue for many applications such as social network analysis, collective intelligence, and content computing. Recently, Abstract Meaning Representation (AMR) has attracted many researchers for its ability to represent the semantics of an entire sentence. However, because AMR graphs are non-projective and contain reentrancies, parsers often lose important semantic information when mapping sentences to graphs. In this paper, we propose a general AMR parsing model that uses a two-stack-based transition algorithm and applies to both Chinese and English datasets. It incrementally parses sentences into AMR graphs in linear time. Experimental results demonstrate that it is superior in recovering reentrancies and handling arcs, while remaining competitive with other transition-based neural network models on both English and Chinese datasets.
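The abstract only names a two-stack transition algorithm without giving its transition set, so the following is a minimal Python sketch, under stated assumptions, of how a second stack can let a node take part in more than one arc (i.e., a reentrancy). The transition names (shift, set_aside, bring_back, add_arc) and the role labels in the walk-through are hypothetical placeholders and are not the paper's actual system.

```python
from collections import namedtuple

Arc = namedtuple("Arc", ["head", "label", "dependent"])

class TwoStackState:
    """Illustrative parser configuration with a buffer and two stacks.
    Hypothetical sketch: the real transition set is defined in the paper,
    not here."""

    def __init__(self, tokens):
        self.buffer = list(tokens)   # unread tokens, leftmost first
        self.stack = []              # primary stack of partially built nodes
        self.deque = []              # secondary stack for nodes parked so they
                                     # can later receive further (reentrant) edges
        self.arcs = []               # labelled edges of the growing graph

    def shift(self):
        # Move the next buffer token onto the primary stack.
        self.stack.append(self.buffer.pop(0))

    def set_aside(self):
        # Park the stack top on the secondary stack instead of discarding it,
        # one plausible way a second stack helps recover reentrancies.
        self.deque.append(self.stack.pop())

    def bring_back(self):
        # Return a parked node to the primary stack.
        self.stack.append(self.deque.pop())

    def add_arc(self, label, head_is_top=True):
        # Add a labelled edge between the two topmost stack nodes.
        a, b = self.stack[-2], self.stack[-1]
        head, dep = (b, a) if head_is_top else (a, b)
        self.arcs.append(Arc(head, label, dep))


# Toy walk-through (roughly "The boy wants to go"): "boy" ends up with two
# incoming ARG0 edges, one from "wants" and one from "go", which is exactly
# the reentrant structure a single-stack arc-standard parser tends to miss.
state = TwoStackState(["boy", "wants", "go"])
state.shift(); state.shift()        # stack: [boy, wants]
state.add_arc("ARG0")               # wants --ARG0--> boy
state.set_aside()                   # park "wants"; stack: [boy]
state.shift()                       # stack: [boy, go]
state.add_arc("ARG0")               # go --ARG0--> boy  (reentrancy on "boy")
state.bring_back()                  # stack: [boy, go, wants]
state.add_arc("ARG1")               # wants --ARG1--> go
print(state.arcs)
```

Because every token is shifted once and each transition does constant work, a loop over such transitions processes a sentence in time linear in its length, consistent with the linear-time claim above.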
