Abstract
We propose a multimodal neural machine translation (MNMT) method with semantic image regions, called region-attentive multimodal neural machine translation (RA-NMT). Existing studies on MNMT have mainly focused on employing global visual features or equally sized grid-based local visual features extracted by convolutional neural networks (CNNs) to improve translation performance. However, they neglect the semantic information captured inside the visual features. This study utilizes semantic image regions extracted by object detection for MNMT and integrates visual and textual features using two modality-dependent attention mechanisms. The proposed method was implemented and verified on two neural machine translation (NMT) architectures: the recurrent neural network (RNN) and the self-attention network (SAN). Experimental results on different language pairs of the Multi30k dataset show that the proposed method improves over the baselines and outperforms most of the state-of-the-art MNMT methods. Further analysis demonstrates that the proposed method achieves better translation performance because it makes more effective use of visual features.
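For concreteness, the following is a minimal NumPy sketch of attending over object-detection region features with a decoder query. The function name region_attention, the projection matrices W_q, W_k, W_v, and the toy dimensions are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def region_attention(decoder_state, region_feats, W_q, W_k, W_v):
        # Project the decoder state and region features into a shared attention space.
        q = decoder_state @ W_q                  # (d_att,)
        K = region_feats @ W_k                   # (n_regions, d_att)
        V = region_feats @ W_v                   # (n_regions, d_att)
        scores = K @ q / np.sqrt(q.shape[-1])    # one relevance score per detected region
        weights = softmax(scores)                # attention distribution over regions
        return weights @ V, weights              # visual context vector and weights

    rng = np.random.default_rng(0)
    n_regions, d_img, d_dec, d_att = 36, 2048, 512, 256  # e.g. 36 pooled detector regions
    regions = rng.normal(size=(n_regions, d_img))
    state = rng.normal(size=d_dec)
    ctx, w = region_attention(state, regions,
                              0.02 * rng.normal(size=(d_dec, d_att)),
                              0.02 * rng.normal(size=(d_img, d_att)),
                              0.02 * rng.normal(size=(d_img, d_att)))
    print(ctx.shape, round(w.sum(), 6))          # (256,) 1.0

Because the attention operates over detected regions rather than a uniform grid, each weight corresponds to a semantically meaningful image area.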
Highlights
Neural machine translation (NMT) has achieved state-of-the-art translation performance [43,16,3,45]
Inspired by previous studies [8,13,37,17] investigating attention mechanisms for multi-source learning, we show that a region-dependent attention mechanism is a promising way to make multimodal NMT (MNMT) attend to the salient regions of an image (a sketch of combining the resulting contexts follows these highlights)
This demonstrates that the proposed method is architecture-agnostic, yielding consistent performance improvements across different NMT architectures
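One common way to combine the textual and visual contexts produced by two modality-dependent attention mechanisms is a learned gate. The sketch below is an illustrative assumption (fuse_contexts, w_gate, and the scalar-gate formulation are hypothetical), not necessarily the fusion used in RA-NMT.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def fuse_contexts(text_ctx, visual_ctx, w_gate, b_gate):
        # A scalar gate computed from both contexts decides how much each
        # modality contributes to the fused context fed to the decoder.
        g = sigmoid(np.concatenate([text_ctx, visual_ctx]) @ w_gate + b_gate)
        return g * text_ctx + (1.0 - g) * visual_ctx

    rng = np.random.default_rng(1)
    d = 256
    fused = fuse_contexts(rng.normal(size=d), rng.normal(size=d),
                          0.02 * rng.normal(size=2 * d), 0.0)
    print(fused.shape)  # (256,)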
Summary
Neural machine translation (NMT) has achieved state-of-the-art translation performance [43,16,3,45]. The strength of NMT lies in its ability to learn, in an end-to-end fashion, a direct mapping from the input text to the associated output text. In the context of recurrent neural networks (RNNs), internal states (memory) are used to process variable-length input sequences, which makes them much better at capturing long-term dependencies [3]. In the context of self-attention networks (SANs), a special attention mechanism selects specific parts of an input sequence by relating its elements at different positions, dispensing with recurrence entirely [45]. Many studies [42,21,4] have increasingly focused on incorporating visual input, namely images, to improve translation.
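As a reference for the SAN mechanism described above, here is a minimal NumPy sketch of single-head, unmasked scaled dot-product self-attention in the spirit of [45]; the helper names and toy dimensions are illustrative.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, W_q, W_k, W_v):
        # Every position forms a query and attends over all positions (keys),
        # so dependencies are modeled without recurrence.
        Q, K, V = X @ W_q, X @ W_k, X @ W_v
        scores = Q @ K.T / np.sqrt(Q.shape[-1])  # pairwise position affinities
        return softmax(scores, axis=-1) @ V      # context-mixed representations

    rng = np.random.default_rng(2)
    seq_len, d_model, d_head = 5, 64, 32
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (0.1 * rng.normal(size=(d_model, d_head)) for _ in range(3))
    out = self_attention(X, Wq, Wk, Wv)
    print(out.shape)  # (5, 32)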