Abstract

Evaluating translations objectively and accurately is difficult, which limits the applications of machine translation. In this article, we attribute this difficulty to noise interference during translation evaluation and address the problem from the perspective of causal inference. We assume that the observable translation score is affected simultaneously by the unobservable true translation quality and by noise. If there is a variable that is related to the noise but independent of the true translation quality, the related noise can be eliminated by removing the effect of that variable from the observed score. Based on this causal hypothesis, this article studies the length bias problem of beam search for neural machine translation (NMT) and the input-related noise problem of translation quality estimation (QE). For the NMT length bias problem, we conduct experiments on four typical NMT tasks (Uyghur–Chinese, Chinese–English, English–German, and English–French) with datasets of different scales. Compared with previous approaches, the proposed causally motivated method is model-agnostic and does not require supervised training. For the QE task, we conduct experiments on the WMT'20 submissions. Experimental results show that the denoised QE results achieve higher Pearson correlation with human-assessed scores than the original submissions. Further analyses on the NMT and QE tasks also demonstrate the rationality of the empirical assumptions made in our methods.

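To illustrate the core idea of the abstract, the sketch below shows one plausible reading of "removing the effect of that variable from the observed score": residualizing the observed scores against a covariate assumed to correlate with the noise but not with the true quality (e.g., hypothesis length for the beam-search length bias). The function name `denoise_scores`, the linear-regression estimator, and the toy data are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression


def denoise_scores(scores, noise_covariate):
    """Subtract the component of the observed scores explained by a covariate
    that is assumed to be related to the noise but independent of the true
    translation quality (a hypothetical residualization step)."""
    x = np.asarray(noise_covariate, dtype=float).reshape(-1, 1)
    y = np.asarray(scores, dtype=float)
    reg = LinearRegression().fit(x, y)
    # Residualize: keep only the part of the score not predicted by the covariate.
    return y - reg.predict(x)


# Toy usage: observed scores that drift with hypothesis length (synthetic data).
rng = np.random.default_rng(0)
lengths = rng.integers(5, 40, size=200)
true_quality = rng.normal(0.0, 1.0, size=200)
observed = true_quality - 0.05 * lengths + rng.normal(0.0, 0.1, size=200)
denoised = denoise_scores(observed, lengths)
# The denoised scores correlate more strongly with the true quality than the raw scores do.
print(np.corrcoef(observed, true_quality)[0, 1], np.corrcoef(denoised, true_quality)[0, 1])
```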