Abstract

The aim of this article is to report on recent findings concerning the use of Google Translate outputs in multimodal contexts. Development and evaluation of machine translation often focus on the verbal mode, and accounts in the field exploring text-image relations in automatically translated multimodal documents are rare. This work therefore seeks to describe what such relations are and how they can be analyzed, and it is organized in two parts: first, it explores the problem through an interdisciplinary interface between Machine Translation and Multimodality, analyzing examples from the Wikihow website; second, it reports on a recent investigation of suitable tools and methods for annotating these issues, within the long-term goal of assembling a corpus. Finally, the article discusses the findings, including some limitations and perspectives for future research.

Highlights

  • Since the popularization of computers in the 1980s and the widespread use of the internet that started in the 1990s (Hutchins, Machine Translation: A Concise History), there has been a shift both in the way people use technology and in the way they read (Saçak, 14)

  • Reading has been mediated by templates and cognitively sophisticated algorithms, a scenario that affects the contemporary reader of the digital era. In such globalized informational contexts, readers have been increasingly demanding automatic translation (Quah) for a wider variety of documents containing illustrations, videos, infographics, emoticons, and photographs, all working in cohesive orchestration to build a coherent “multimodal document,” such as webpages, manuals, and news articles (Bateman, Multimodality)

  • The last subsection presents an analysis of tools for annotating intersemiotic mismatches generated by errors in machine translation outputs

Introduction

Since the popularization of computers in the 1980s and the widespread use of the internet that started in the 1990s (Hutchins, Machine Translation: A Concise History), there has been a shift both in the way people use technology and in the way they read (Saçak, 14). Reading has been mediated by templates and cognitively sophisticated algorithms, a scenario that affects the contemporary reader of the digital era. In such globalized informational contexts, readers have been increasingly demanding automatic translation (Quah) for a wider variety of documents containing illustrations, videos, infographics, emoticons, and photographs, all working in cohesive orchestration to build a coherent “multimodal document,” such as webpages, manuals, and news articles (Bateman, Multimodality). In the past few years, studies conducted at the interface of Machine Translation and Multimodality have started to grow. They typically adopt an engineering perspective, testing whether multimodal information can improve the accuracy of machine translation.
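
To give a concrete picture of the annotation task outlined in the abstract, the sketch below shows one possible record structure for a corpus of intersemiotic mismatches. This is a minimal illustration only: the MismatchType categories, the field names, and the example segment and translation are all invented for demonstration and are not the scheme or tooling used in the study.

    from dataclasses import dataclass
    from enum import Enum


    class MismatchType(Enum):
        """Hypothetical mismatch categories, invented for illustration."""
        REFERENCE = "reference"      # translated text no longer picks out the depicted object
        ACTION = "action"            # translated verb contradicts the action shown in the image
        TERMINOLOGY = "terminology"  # mistranslated term breaks the text-image link


    @dataclass
    class MismatchAnnotation:
        """One corpus entry pairing a machine-translated segment with its image."""
        source_text: str   # original segment, e.g. a Wikihow step
        mt_output: str     # machine translation output for that segment
        image_ref: str     # path or URL of the accompanying image
        mismatch: MismatchType
        note: str = ""     # free-text annotator comment


    # Invented example: the ambiguous English noun "bat" is rendered as the
    # animal ("morcego") in Portuguese, while the accompanying photo shows a
    # baseball bat, so the text-image relation breaks down.
    entry = MismatchAnnotation(
        source_text="Hold the bat firmly with both hands.",
        mt_output="Segure o morcego firmemente com as duas mãos.",
        image_ref="images/step_01.jpg",
        mismatch=MismatchType.REFERENCE,
        note="MT resolves 'bat' to the animal; the photo depicts a baseball bat.",
    )
    print(entry.mismatch.value)  # -> "reference"

A flat record like this keeps each text-image pair together with the annotator's judgment, which is the kind of information a corpus of intersemiotic mismatches would minimally need to store.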

Objectives
Machine translation output classification
Intersemiotic texture
Intersemiotic mismatches in webpages translated automatically
Tools for intersemiotic mismatch analysis
Final remarks