Abstract

Traditional neural machine translation (NMT) methods use word-level context to predict the target-language translation while neglecting sentence-level context, which has been shown to benefit translation prediction in statistical machine translation. This paper represents the sentence-level context as latent topic representations obtained with a convolutional neural network, and designs a topic attention mechanism to integrate source sentence-level topic context into both attention-based and Transformer-based NMT. In particular, our method improves NMT performance by jointly modeling source topics and translations. Experiments on large-scale LDC Chinese-to-English and WMT'14 English-to-German translation tasks show that the proposed approach achieves significant improvements over baseline systems.
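As a rough illustration of the architecture the abstract describes, the following PyTorch sketch derives latent topic vectors from source word embeddings with a 1-D CNN and lets the decoder state attend over them to produce a topic context vector. All names, dimensions, and the additive (Bahdanau-style) scoring form are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAttention(nn.Module):
    """Sketch of sentence-level topic attention for NMT.

    A 1-D CNN maps source word embeddings to K latent topic vectors;
    the decoder state then attends over them to build a topic context
    vector. All design details here are assumptions for illustration.
    """

    def __init__(self, embed_dim, hidden_dim, num_topics=8, kernel_size=3):
        super().__init__()
        # One convolutional filter bank per latent topic (assumed design).
        self.convs = nn.ModuleList([
            nn.Conv1d(embed_dim, hidden_dim, kernel_size,
                      padding=kernel_size // 2)
            for _ in range(num_topics)
        ])
        # Additive attention parameters.
        self.w_query = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_topic = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, src_embeds, dec_state):
        # src_embeds: (batch, src_len, embed_dim); dec_state: (batch, hidden_dim)
        x = src_embeds.transpose(1, 2)  # (batch, embed_dim, src_len)
        # Conv + max-over-time pooling yields one sentence-level vector per topic.
        topics = torch.stack(
            [F.relu(conv(x)).max(dim=2).values for conv in self.convs],
            dim=1,
        )  # (batch, num_topics, hidden_dim)
        # Score each topic vector against the current decoder state.
        scores = self.v(torch.tanh(
            self.w_query(dec_state).unsqueeze(1) + self.w_topic(topics)
        )).squeeze(-1)  # (batch, num_topics)
        weights = F.softmax(scores, dim=-1)
        # Weighted mixture of topic vectors: the topic context vector.
        return torch.bmm(weights.unsqueeze(1), topics).squeeze(1)


# Example: a topic context for a batch of 4 source sentences of length 20.
attn = TopicAttention(embed_dim=512, hidden_dim=512)
topic_ctx = attn(torch.randn(4, 20, 512), torch.randn(4, 512))  # (4, 512)
```

In a full system, this topic context vector would plausibly be combined (e.g., concatenated) with the usual word-level attention context before predicting each target word, consistent with the abstract's description of integrating sentence-level topic information into attention-based and Transformer-based NMT.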
