Abstract

Sentence alignment is a fundamental task in natural language processing that aims to extract high-quality parallel sentences automatically. Motivated by the observation that aligned sentence pairs contain more aligned words than unaligned ones, we treat word translation as a particularly useful form of external knowledge. In this paper, we show how to explicitly integrate word translation into neural sentence alignment. Specifically, we propose three cross-lingual encoders to incorporate word translation: 1) a Mixed Encoder that learns annotation vectors over sequences in which words and their translations are interleaved; 2) a Factored Encoder that treats word translations as features and encodes words and their translations by concatenating their embeddings; and 3) a Gated Encoder that uses a gating mechanism to selectively control how much word-translation information is passed forward. Experiments on the NIST MT and OpenSubtitles Chinese-English datasets, in both monotonic and non-monotonic scenarios, demonstrate that all three proposed encoders significantly improve sentence alignment performance.
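To make the third variant concrete, below is a minimal NumPy sketch of one plausible form of the gated combination: a learned sigmoid gate mixes a word's embedding with its translation's embedding dimension-wise. The parameter names (W_g, b_g) and the exact gating formula are illustrative assumptions, not the paper's specification, and the weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 4  # embedding size (illustrative)

# Hypothetical embeddings for a source word and its translation.
h_word = rng.normal(size=d)
h_trans = rng.normal(size=d)

# Gate parameters (random here; learned during training in practice).
W_g = rng.normal(size=(d, 2 * d))
b_g = np.zeros(d)

# The gate looks at both vectors and decides, per dimension,
# how much translation signal to let through.
g = sigmoid(W_g @ np.concatenate([h_word, h_trans]) + b_g)

# Gated combination: a convex mix of word and translation vectors,
# which the sentence encoder would then consume in place of h_word.
h_gated = g * h_word + (1.0 - g) * h_trans
```

Because each gate value lies in (0, 1), the encoder can suppress noisy translations (g near 1 keeps mostly the word vector) or lean on them when they are informative.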
