Abstract

Translation quality and its evaluation play a crucial role in the field of machine translation (MT). This paper focuses on the quality assessment of automatic metrics for MT evaluation. In our study we assess the reliability and validity of the following automatic metrics: Position-independent Error Rate (PER), Word Error Rate (WER) and Cover Disjoint Error Rate (CDER). These metrics quantify the error rate of MT output, and thereby of the MT system itself; in our case, an online statistical MT system. The results of the reliability analysis showed that these automatic MT evaluation metrics are reliable and valid, with validity and reliability verified for one translation direction: from a minority language (Slovak) into English.
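For context on the metrics named above: WER is the word-level edit distance between a hypothesis and its reference, normalized by reference length, while PER drops the ordering constraint and compares the two sentences as bags of words. The sketch below illustrates one common formulation of each on toy sentences; it is ours, not the paper's implementation, and CDER (which additionally allows block movements of covered word spans) is omitted for brevity.

```python
from collections import Counter

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))  # DP row for the empty reference prefix
    for i, r in enumerate(ref, 1):
        diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            diag, d[j] = d[j], min(d[j] + 1,          # delete reference word
                                   d[j - 1] + 1,      # insert hypothesis word
                                   diag + (r != h))   # substitute (or match)
    return d[-1] / len(ref)

def per(reference: str, hypothesis: str) -> float:
    """Position-independent Error Rate: bag-of-words mismatch / reference length."""
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    matches = sum((ref & hyp).values())  # multiset overlap, order ignored
    n_ref, n_hyp = sum(ref.values()), sum(hyp.values())
    return (max(n_ref, n_hyp) - matches) / n_ref

# Reordering is penalized by WER but invisible to PER:
print(wer("the cat sat on the mat", "the cat on the mat sat"))  # 0.333...
print(per("the cat sat on the mat", "the cat on the mat sat"))  # 0.0
```

The example shows why these metrics are studied together: WER charges two edits for moving "sat", whereas PER scores the reordered hypothesis as error-free, so the two rates bound a system's error from above and below.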
