Abstract

The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems. Various reference-based metrics have been proposed to calculate a score between a predicted response and a small set of references. However, these metrics show unsatisfactory correlations with human judgments. For a reference-based metric, its reliability mainly depends on two factors: its ability to measure the similarity between the predicted response and the reference responses, and the reliability of the given reference set. Yet, there has been little discussion of the latter. Our work attempts to fill this gap. We first clarify an assumption underlying reference-based metrics: if more high-quality references are added to the reference set, the reliability of the metric will increase. Next, we present REAM$\sharp$: an enhancement approach to Reference-based EvAluation Metrics for open-domain dialogue systems. A prediction model is designed to estimate the reliability of a given reference set. We show how its predictions can help augment the reference set and thus improve the reliability of the metric. Experiments validate both the effectiveness of our prediction model and the improved reliability of reference-based metrics when the augmented reference sets are used.
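To make the setup concrete, the sketch below shows one way a multi-reference metric of this kind can be computed. The token-overlap F1 similarity and the max-over-references aggregation are illustrative assumptions, not the specific metric studied in the paper; in practice the similarity would typically come from a metric such as BLEU or BERTScore.

```python
# Minimal sketch of a multi-reference reference-based metric.
# Assumptions (not taken from the paper): similarity is a simple token-overlap
# F1 standing in for a learned metric such as BERTScore, and per-reference
# scores are aggregated by taking the maximum over the reference set.
from collections import Counter
from typing import List


def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a reference response."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def reference_based_score(prediction: str, references: List[str]) -> float:
    """Score a predicted response against a set of reference responses.

    Taking the max rewards a prediction that matches any acceptable reference,
    which is why a larger, higher-quality reference set should make the
    metric more reliable.
    """
    return max(token_f1(prediction, ref) for ref in references)


print(reference_based_score(
    "i love hiking on weekends",
    ["i enjoy hiking at the weekend", "me too, what is your hobby?"],
))
```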

Highlights

  • The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems (Li and Jurafsky, 2016; Gao et al., 2019; Li et al., 2020a)

  • The reliability scores predicted by the REAM (BERT) model will be revealed to annotators in the first group (“HumanREAM (BS)”) for interactive annotation, but not to annotators in the second group (“Human”)

  • We first clarify an assumption on existing reference-based metrics: if more high-quality reference responses are added to the reference set, the metric should correlate better with human judgment (see the sketch following this list)

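The sketch below illustrates how this assumption can be checked: score a set of predictions against progressively larger reference sets and measure the Pearson correlation with human ratings. The toy responses, human scores, and string-ratio similarity are hypothetical placeholders, not the paper's data or metric.

```python
# Illustration of the assumption above: as (high-quality) references are added,
# the metric's correlation with human judgments should rise. The dialogue data,
# human scores, and similarity function are illustrative placeholders only.
from difflib import SequenceMatcher
from typing import List

import numpy as np


def similarity(a: str, b: str) -> float:
    # Cheap stand-in for a real similarity metric (BLEU, BERTScore, ...).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def multi_ref_score(prediction: str, references: List[str]) -> float:
    return max(similarity(prediction, r) for r in references)


# Toy predictions with hypothetical human quality ratings in [1, 5].
predictions = ["sure, see you at noon", "i have no idea", "pizza sounds great"]
human_scores = [4.5, 2.0, 4.0]

# Reference pool ordered so that each prefix is a progressively larger set.
reference_pool = [
    "sounds good, see you then",
    "pizza would be perfect tonight",
    "okay, noon works for me",
]

for k in range(1, len(reference_pool) + 1):
    refs = reference_pool[:k]
    metric_scores = [multi_ref_score(p, refs) for p in predictions]
    pearson = np.corrcoef(metric_scores, human_scores)[0, 1]
    print(f"{k} reference(s): Pearson r = {pearson:.3f}")
```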

Introduction

The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems (Li and Jurafsky, 2016; Gao et al., 2019; Li et al., 2020a). Existing evaluation metrics for open-domain dialogue systems can be roughly divided into reference-based and reference-free metrics. Reference-based metrics usually measure how similar a generated response is to the reference responses. Reference-free metrics, on the other hand, measure the quality of a response without any reference and usually focus on specific aspects of the response. For example, much work computes the perplexity of a generated response as a measure of fluency (Li et al., 2020b) and adopts Dist-1/2 (Li et al., 2016b) to measure its diversity.
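As a concrete example of such a reference-free diversity measure, the sketch below computes corpus-level Dist-1/2 as the ratio of distinct unigrams or bigrams to the total number of generated n-grams. Tokenization and normalization details vary across implementations, so this should be read as one common variant rather than the canonical one.

```python
# Sketch of Dist-1/2 (Li et al., 2016b), computed at the corpus level:
# the number of distinct n-grams divided by the total number of n-grams
# across all generated responses. One common variant; details differ
# across implementations.
from typing import List, Tuple


def distinct_n(responses: List[str], n: int) -> float:
    """Corpus-level Dist-n: unique n-grams / total n-grams."""
    ngrams: List[Tuple[str, ...]] = []
    for response in responses:
        tokens = response.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)


responses = ["i do not know", "i do not know", "maybe we could try the new cafe"]
print(f"Dist-1 = {distinct_n(responses, 1):.3f}")
print(f"Dist-2 = {distinct_n(responses, 2):.3f}")
```

A repetitive system that always answers "i do not know" scores low on Dist-1/2, which is exactly the degenerate behaviour this diversity measure is meant to expose.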
