Abstract

In this paper we describe the use of QuEst, a framework that aims to obtain predictions on the quality of translations, to improve the performance of machine translation (MT) systems without changing their internal functioning. We apply QuEst to experiments with: i. multiple system translation ranking, where translations produced by different MT systems are ranked according to their estimated quality, leading to gains of up to 2.72 BLEU, 3.66 BLEUs, and 2.17 F1 points; ii. n-best list re-ranking, where n-best list translations produced by an MT system are re-ranked based on predicted quality scores so that the best translation is ranked top, which led to improvements of 0.41 points in sentence NIST score; iii. n-best list combination, where segments from an n-best list are combined using a lattice-based re-scoring approach that minimizes word error, obtaining gains of 0.28 BLEU points; and iv. the ITERPE strategy, which attempts to identify translation errors regardless of prediction errors (ITERPE) and build sentence-specific SMT systems (SSSS) on the ITERPE-sorted instances identified as having more potential for improvement, achieving gains of up to 1.43 BLEU, 0.54 F1, 2.9 NIST, 0.64 sentence BLEU, and 4.7 sentence NIST points in English to German over the top 100 ITERPE-sorted instances.
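To make application (i) concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of quality-based system selection: each candidate translation is scored by a quality-estimation model and the highest-scoring one is chosen per source sentence. The `predict_quality` callable is a hypothetical stand-in for a trained QuEst-style regressor.

```python
# Illustrative sketch, not the paper's code: select the best translation among
# multiple MT systems using predicted quality scores for each candidate.

from typing import Callable, Dict, List, Tuple


def rank_system_outputs(
    source: str,
    system_outputs: Dict[str, str],
    predict_quality: Callable[[str, str], float],
) -> List[Tuple[str, str, float]]:
    """Return (system, translation, score) tuples sorted by predicted quality."""
    scored = [
        (system, translation, predict_quality(source, translation))
        for system, translation in system_outputs.items()
    ]
    # Higher predicted quality first; the top entry is the selected translation.
    return sorted(scored, key=lambda item: item[2], reverse=True)


if __name__ == "__main__":
    # Toy quality predictor used only for this demo (assumption): prefers longer output.
    toy_predictor = lambda src, hyp: float(len(hyp.split()))
    outputs = {"sysA": "the house is small", "sysB": "the house is very small"}
    print(rank_system_outputs("das Haus ist sehr klein", outputs, toy_predictor))
```

In practice the predictor would be a sentence-level QE model trained on features of the source and candidate translation; the ranking step itself is independent of how the scores are produced.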
