
INTRODUCTION

Expert human translation still surpasses the best results of machine translation (MT) systems (Bar-Hillel, 2003), but it is often hard to schedule an interpreter on the spur of the moment, especially for relatively obscure languages. Several free, fully automatic, Web-based translation services are available to fill this need, but at the expense of lower accuracy. However, many translations do not need to be perfect. For example, a reader of a Web page or an email message written in a foreign language might need only the gist of the passage before deciding whether a more detailed, human translation is needed or the content is not important enough to pursue further. That is, lower accuracy delivered quickly can have greater value than higher accuracy that arrives too late (Muegge, 2006). As a result, more words are now translated per year using MT than by human translators, and the demand continues to grow (LISA, 2009). Few studies have been conducted on the relative accuracies of these Web-based services, however. The purpose of this paper is to provide a performance overview of four leading Web-based MT systems and to further assess the accuracy of the best of them.

Prior Studies of Web-Based MT Systems

Machine translation was first proposed in 1947, and the first demonstration of a translation system took place in January 1954 (Hutchins, 2003). MT became available for personal computers in 1981, and in 1997 Babel Fish (using SYSTRAN) appeared as the first free translation service on the World Wide Web (Yang & Lange, 1998). Although several evaluation studies of MT systems have been conducted (e.g., NIST, 2008), an extensive review of the literature shows that only a few have focused solely on Web-based versions. For example, four have tested the accuracy of SYSTRAN (originally provided at http://babelfish.altavista.com/babelfish; now at http://babelfish.yahoo.com/):

Study 1 (Aiken, Rebman, Vanjani, & Robbins, 2002): In one of the earliest studies of a Web-based MT system, four participants used SYSTRAN to automatically translate German, French, and English comments in an electronic meeting. After the meeting, two objective reviewers judged the overall accuracy of the translations to be about 50%, while understandability was about 95%.

Study 2 (Aiken, Vanjani, & Wong, 2006): In another study, 92 undergraduate students evaluated SYSTRAN translations of 12 Spanish text samples into English, and they were unable to understand only two of the 12 translations (83% accuracy). No significant differences in understandability were found based on gender, but participants who reported understanding some Spanish understood many of the English translations better. Further, accuracy did not appear to correlate with sentence complexity.

Study 3 (Yates, 2006): In a third study, 20 sentences (10 Spanish, 10 German) selected from Mexican and German civil codes and from foreign-ministry press releases were translated into English with SYSTRAN, and the author evaluated the accuracy of the samples. The system's performance was rated as poor, but not uniformly so: the German texts were translated less poorly than the Spanish ones.

Study 4 (Ablanedo, Aiken, & Vanjani, 2007): In a final study, 10 English text samples were translated by an expert and an intermediate-level Spanish translator, as well as by SYSTRAN. The most fluent human was 100% accurate, and the other achieved 80% accuracy. The MT system achieved only 70% accuracy but was 195 times faster than the humans.
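The accuracy figures in these studies are simple proportions of translations that evaluators judged correct or understandable. Purely as an illustration (none of the cited studies describes its tabulation in code), a minimal sketch of that calculation might look like the following; the function name and the list-of-booleans layout are assumptions introduced here, with counts mirroring Study 2.

```python
# Illustrative sketch only: "understanding accuracy" as the share of
# translations that evaluators judged understandable.

def understanding_accuracy(judgments):
    """judgments: list of booleans, True if a translation was understood."""
    return sum(judgments) / len(judgments)

# Counts mirroring Study 2: 12 Spanish-to-English samples, 2 not understood.
study2 = [True] * 10 + [False] * 2
print(f"{understanding_accuracy(study2):.0%}")  # prints 83%
```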
All of these tests were based upon SYSTRAN, the system deemed most reliable at the time of the studies. However, new translation software from Google appeared in October 2007. Abandoning the rule-based algorithms of SYSTRAN, which the site had used previously, Google Translate (http://translate. …
