Abstract

Traditional automatic machine translation (MT) evaluation methods compute the similarity between MT output and human reference translations. For many users, however, references are unavailable, so developing an evaluation method that requires no references is a key research issue. In this paper, we propose a novel automatic MT evaluation method that needs no human reference translations. First, we compute the average n-gram probability of the source sentence under a source-language model; similarly, we compute the average n-gram probability of the machine-translated sentence under a target-language model; finally, we use the relative error between the two average n-gram probabilities to score the machine-translated sentence. Experimental results show that our method achieves high correlations with several automatic MT evaluation metrics. The main contribution of this paper is that users can obtain a reliable MT evaluation in the absence of reference translations, which greatly improves the utility of MT evaluation metrics.

Keywords: machine translation evaluation, automatic evaluation, without reference translations
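The abstract gives no pseudocode, so the following is a minimal Python sketch of the scoring idea as described: average n-gram probabilities on each side, then their relative error. The function names, the bigram order, the probability floor, and the toy language-model tables are all illustrative assumptions, not the authors' implementation.

```python
from typing import Dict, List, Tuple

# Toy placeholder LMs: in the paper's setting these probabilities would come
# from language models trained on large source- and target-language corpora.
NgramLM = Dict[Tuple[str, ...], float]


def avg_ngram_prob(tokens: List[str], model: NgramLM, n: int) -> float:
    """Average probability of the sentence's n-grams under a language model.

    N-grams missing from the model get a small floor probability
    (a simple smoothing assumption, not necessarily the paper's choice).
    """
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    floor = 1e-6
    return sum(model.get(g, floor) for g in ngrams) / len(ngrams)


def relative_error_score(src_tokens: List[str], mt_tokens: List[str],
                         src_model: NgramLM, tgt_model: NgramLM,
                         n: int = 2) -> float:
    """Reference-free score: relative error between the average n-gram
    probability of the source (under a source LM) and of the MT output
    (under a target LM). Smaller values indicate a better translation,
    under this reading of the abstract."""
    p_src = avg_ngram_prob(src_tokens, src_model, n)
    p_mt = avg_ngram_prob(mt_tokens, tgt_model, n)
    if p_src == 0.0:
        return 0.0
    return abs(p_src - p_mt) / p_src


# Usage with toy bigram tables (values are made up for illustration):
src_lm: NgramLM = {("das", "haus"): 0.010, ("haus", "ist"): 0.008}
tgt_lm: NgramLM = {("the", "house"): 0.012, ("house", "is"): 0.009}
score = relative_error_score(["das", "haus", "ist"],
                             ["the", "house", "is"],
                             src_lm, tgt_lm)
print(f"relative-error score: {score:.3f}")
```

The design choice worth noting is that no reference translation appears anywhere: the target-side language model stands in for the reference, and the source-side probability serves as the baseline against which the MT output's fluency is compared.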
