Abstract

Multi-modal neural machine translation (MNMT) integrates visual and textual information to translate source sentences into a target language, and has attracted considerable attention. Existing methods have made progress in capturing the interactions between visual and textual features to improve neural machine translation (NMT), but most of them do not consider multi-modal consistency. In fact, an image provides global semantic consistency across languages. We believe that enforcing bilingual-visual agreement in both the encoder and the decoder yields bilingual representations that benefit NMT. In this paper, we propose to integrate visual information into the encoder and decoder simultaneously to learn the interactions between visual and textual features; we call this model VMNMT. Because the visual information provides global context, the encoder and decoder can learn bilingual representations. In addition, we introduce a new bilingual-visual agreement decoder that learns better representations of corresponding image-sentence pairs. In our experiments, the improvement is 2.02 BLEU on the English-German16 dataset and 1.9 BLEU on the English-German17 dataset, and the results show that our method outperforms baselines on several widely used datasets in terms of various metrics.
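The abstract only sketches the architecture, so the following is a minimal, hypothetical PyTorch sketch of the core idea it describes: fusing image features with textual representations in both the encoder and the decoder via cross-attention. All module names, dimensions, and the fusion scheme (additive cross-attention over image region features) are illustrative assumptions, not the paper's exact VMNMT design or its bilingual-visual agreement objective.

```python
# Hypothetical sketch, not the paper's implementation: visual features are
# cross-attended by both the encoder output and the decoder output, so that
# the shared image acts as a global context on the source and target sides.
import torch
import torch.nn as nn


class VisualFusionLayer(nn.Module):
    """Cross-attends text states to image region features and adds the result."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_states: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_states: (batch, seq_len, d_model); image_feats: (batch, regions, d_model)
        visual_ctx, _ = self.cross_attn(text_states, image_feats, image_feats)
        return self.norm(text_states + visual_ctx)


class ToyVisualMNMT(nn.Module):
    """Toy encoder-decoder where both sides fuse the same visual context (assumed scheme)."""

    def __init__(self, src_vocab: int, tgt_vocab: int, d_model: int = 512):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.enc_fusion = VisualFusionLayer(d_model)
        self.dec_fusion = VisualFusionLayer(d_model)
        self.out_proj = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids, image_feats):
        enc = self.encoder(self.src_embed(src_ids))
        enc = self.enc_fusion(enc, image_feats)   # visual context on the source side
        dec = self.decoder(self.tgt_embed(tgt_ids), enc)
        dec = self.dec_fusion(dec, image_feats)   # visual context on the target side
        return self.out_proj(dec)
```

Under this assumed setup, a bilingual-visual agreement term would compare the visually grounded encoder and decoder states of a paired image-sentence example; the abstract does not specify the exact loss, so it is omitted here.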
