Abstract

In neural machine translation (NMT), there is a natural correspondence between source and target sentences, yet conventional NMT methods do not explicitly model this sentence-level translation agreement. In this article, we propose a novel sentence-level agreement architecture to address this problem. It directly minimizes the difference between the sentence-level representations of the source and target sides. First, we compare a variety of sentence representation strategies and propose a "Gated Sum" sentence representation that better captures sentence semantics. Then, going beyond a single-layer sentence-level agreement architecture, we propose a multi-layer sentence agreement architecture that draws the source and target semantic spaces closer layer by layer. The proposed agreement module can be integrated into NMT as an additional training objective and can also be used to enhance the representation of source-side sentences. Experiments on the NIST Chinese-to-English and WMT English-to-German translation tasks show that the proposed agreement architecture achieves significant improvements over state-of-the-art baselines, demonstrating the effectiveness and necessity of exploiting sentence-level agreement for NMT.
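To make the idea concrete, the sketch below illustrates one plausible reading of the abstract: a gated-sum pooling module that compresses token states into a sentence vector, and a multi-layer agreement term that penalizes the distance between pooled source and target representations. This is a minimal sketch, not the paper's implementation; the names `GatedSum`, `sentence_agreement_loss`, `enc_layers`, `dec_layers`, and the choice of MSE as the distance are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSum(nn.Module):
    """Gated-sum sentence pooling: a learned sigmoid gate weights each
    token state before summation (a sketch of a 'Gated Sum' strategy)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, hidden_size)

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, hidden) -> sentence vector (batch, hidden)
        return (torch.sigmoid(self.gate(states)) * states).sum(dim=1)

def sentence_agreement_loss(src_layers, tgt_layers, pool, weight=1.0):
    """Multi-layer agreement: sum, over aligned layers, of the squared
    distance between pooled source and target sentence representations."""
    loss = 0.0
    for src, tgt in zip(src_layers, tgt_layers):
        loss = loss + F.mse_loss(pool(src), pool(tgt))
    return weight * loss

# Hypothetical training step: the agreement term is added to the usual
# cross-entropy translation loss (enc_layers/dec_layers are lists of
# per-layer hidden states from encoder and decoder, assumed names).
#   pool = GatedSum(hidden_size=512)
#   total_loss = ce_loss + sentence_agreement_loss(enc_layers, dec_layers, pool)
```

In training, the target-side states would come from the decoder under teacher forcing, so the agreement term can be optimized jointly with the translation objective; the weight on the agreement term is a tunable hyperparameter.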
