Abstract

Recent research has shown that syntactic knowledge is effective in improving the performance of neural machine translation (NMT). Most previous work focuses on leveraging either source or target syntax in recurrent neural network (RNN) based encoder–decoder models. In this paper, we simultaneously use both source and target dependency trees to improve the NMT model. First, we propose a simple but effective syntax-aware encoder that incorporates the source dependency tree into NMT. The new encoder enriches each source state with dependency relations from the tree. Then, we propose a novel sequence-to-dependency framework in which the target translation and its corresponding dependency tree are jointly constructed and modeled. During decoding, the tree structure is used as context to facilitate word generation. Finally, we combine the sequence-to-dependency framework with the syntax-aware encoder to build a dependency-NMT model, and we also apply the dependency-based framework to the Transformer. Experimental results on several translation tasks show that both source and target dependency structures improve translation quality, and that their benefits are cumulative.
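
As a rough illustration (not the authors' implementation, which the abstract does not specify), the following PyTorch sketch shows one way a syntax-aware encoder could enrich each source state with dependency information: every token's embedding is concatenated with embeddings of its dependency head word and relation label before a bidirectional GRU. All dimensions, the shared word embeddings, and the head/label input format are assumptions made for this example.

import torch
import torch.nn as nn

class SyntaxAwareEncoder(nn.Module):
    """Hypothetical syntax-aware encoder: each source state sees its
    dependency head word and relation label in addition to the word itself."""
    def __init__(self, vocab_size, n_labels, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.label_emb = nn.Embedding(n_labels, emb_dim)
        # Input per token: word embedding + head-word embedding + label embedding.
        self.rnn = nn.GRU(3 * emb_dim, hidden_dim // 2,
                          bidirectional=True, batch_first=True)

    def forward(self, words, heads, labels):
        # words, heads, labels: (batch, seq_len) index tensors, where
        # heads[i, j] is the vocabulary id of token j's dependency head
        # and labels[i, j] is its relation-label id (assumed inputs).
        w = self.word_emb(words)
        h = self.word_emb(heads)   # reuse word embeddings for head words
        l = self.label_emb(labels)
        states, _ = self.rnn(torch.cat([w, h, l], dim=-1))
        return states              # each state now carries tree context

# Toy usage with a 3-token sentence and dummy parse indices.
enc = SyntaxAwareEncoder(vocab_size=1000, n_labels=40)
words = torch.tensor([[5, 17, 3]])
heads = torch.tensor([[17, 17, 17]])   # e.g., all tokens headed by token 17
labels = torch.tensor([[2, 0, 7]])
print(enc(words, heads, labels).shape)  # torch.Size([1, 3, 512])

The same enriched states can then feed any standard attention-based decoder; the sequence-to-dependency framework described above would additionally have the decoder emit tree-construction actions alongside target words, which this sketch does not model.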
