Abstract

Most neural machine translation models treat a natural-language sentence as a flat sequence of words, which ignores its intrinsic hierarchical structure and, in particular, discards syntactic information. In this paper, the effect of syntactic information on translation is examined, and a neural translation model is proposed: a syntax-aware encoder-decoder framework that exploits the source-side syntactic parse tree. On the encoder side, we propose a bidirectional tree encoding model that incorporates both sequential context and syntactic-structure context when learning source-side representations. In the decoding phase, a tree coverage model is proposed that uses source-side syntactic information to constrain the choice of target words as the decoder generates the translation. Experiments show that the proposed encoding model effectively enhances the original bottom-up tree encoder, and that incorporating syntactic information into the decoder indeed gives better control over translation generation.
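To make the bidirectional tree-encoding idea concrete, the following is a minimal, purely illustrative sketch: a bottom-up pass composes child states into parent states over a parse tree, and a top-down pass then propagates ancestor context back to every node, so each node's final representation combines both directions. All names, the toy scalar "embedding", and the tanh-sum composition function are assumptions for illustration; they are not the paper's actual model.

```python
import math

class Node:
    """A parse-tree node: leaves carry a word, internal nodes carry children."""
    def __init__(self, word=None, children=None):
        self.word = word              # leaf word, None for internal nodes
        self.children = children or []
        self.up = None                # bottom-up (syntactic composition) state
        self.down = None              # top-down (ancestor context) state

def embed(word):
    # Toy deterministic scalar "embedding" so the sketch is self-contained.
    return (sum(ord(c) for c in word) % 1000) / 1000.0

def bottom_up(node):
    # Leaves start from word embeddings; internal nodes compose child states.
    if not node.children:
        node.up = embed(node.word)
    else:
        for child in node.children:
            bottom_up(child)
        node.up = math.tanh(sum(child.up for child in node.children))
    return node.up

def top_down(node, parent_state=0.0):
    # Each node also receives context flowing down from its ancestors.
    node.down = math.tanh(node.up + parent_state)
    for child in node.children:
        top_down(child, node.down)

def encode(root):
    """Run both passes; return (up, down) state pairs for every node."""
    bottom_up(root)
    top_down(root)
    states = []
    def collect(n):
        states.append((n.up, n.down))
        for c in n.children:
            collect(c)
    collect(root)
    return states
```

For example, encoding the two-word tree `Node(children=[Node(word="the"), Node(word="cat")])` yields one state pair per node, each informed by both the subtree below it and the ancestors above it; a real model would replace the scalar states with learned vector-valued recurrent units.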
