Abstract

This research identifies different controlled English (CE) norms to be followed in technical writing for a variety of purposes and for different machine translation (MT) systems. The results of the investigation show that CE norms for MT application are stricter than those for communicative reading. The primary inference is that human beings can interpret the meanings of polysemous words, pronouns, and prepositional phrases from context and can easily detect misspellings, whereas MT systems fail to do so. In addition, a comparison of CE norms for the application of two MT systems indicates that the corpus-based Google MT is less constrained than the rule-based TransWhiz in the lexical area. This is attributable to Google MT relying on a probabilistic module to score and select suggested translations semantically, rather than on the word-for-word translation performed by TransWhiz. In contrast, Google MT is more constrained than TransWhiz in the syntactic area. The inference is that TransWhiz parses syntactic constructions and transfers the parsing result according to grammatical rules stored in the MT system, so it may modify the original word sequence to make the translation conform to linguistic patterns in the target language. Google MT, by contrast, depends on fuzzy or exact matches retrieved statistically from its labeled corpus; if no matches can be found, syntactically inappropriate translations will be produced. Seen in this light, CE norms are never fixed and must be revised as time passes and MT technology evolves.
