Abstract
The question of context in neural machine translation often focuses on topics related to document-level translation or intersentential context. However, there is a wide range of other aspects that can be considered under the umbrella of context. In this work, we survey ways that researchers have incorporated context into neural machine translation systems and the evaluation thereof. This includes building translation systems that operate at the paragraph level or the document level, or ones that translate at the sentence level but incorporate information from other sentences. We also consider how issues like terminology consistency, anaphora, and world knowledge or external information can be considered as types of context relevant to the task of machine translation and its evaluation. Closely tied to these topics is the question of how best to evaluate machine translation output in a way that is sensitive to the contexts in which it appears. To this end, we discuss work on incorporating context into both human and automatic evaluations of machine translation quality. Furthermore, we discuss recent experiments in the field as they relate to the use of large language models in translation and evaluation. We conclude with a view of the future of machine translation, where we expect issues of context to continue to come to the forefront.