Abstract

The aim of this article is to report on recent findings concerning the use of Google Translate outputs in multimodal contexts. The development and evaluation of machine translation typically focus on the verbal mode, and studies in the area exploring text-image relations in automatically translated multimodal documents remain rare. This work therefore seeks to describe what such relations are and how they can be characterized, and is organized in two parts: first, it explores the problem through an interdisciplinary interface between Machine Translation and Multimodality, analyzing examples from the Wikihow website; second, it reports on a recent investigation of suitable tools and methods for annotating these issues, within the long-term goal of assembling a corpus. Finally, the article discusses the findings, including some limitations and perspectives for future research.
