Abstract
Word alignment models generate word-aligned parallel text, which statistical machine translation systems rely on. Currently, the most popular word alignment models are the IBM models, which have been applied in a large number of translation systems. The parameters of the IBM models are estimated by the Maximum Likelihood principle, i.e., by counting co-occurrences of words in the parallel text. This way of estimating parameters leads to an "ambiguity" problem: some words appear together in many sentence pairs even though none of them is a translation of any other. Additionally, this method requires a large amount of training data to achieve good results, yet the parallel text used to train the IBM models is usually limited for low-resource languages. In this work, we address these two problems by adding semantic information to the models. Our semantic information is derived from word embeddings, which require only monolingual data to train. We evaluate on English-Vietnamese, a language pair with great differences in grammatical structure. Even on this challenging task, our proposed models achieve significant improvements in word alignment quality and help increase translation quality.