Abstract

There is an increasing focus on model-based dialog evaluation metrics such as ADEM, RUBER, and the more recent BERT-based metrics. These models aim to assign a high score to all relevant responses and a low score to all irrelevant responses. Ideally, such models should be trained using multiple relevant and irrelevant responses for any given context. However, no such data is publicly available, and hence existing models are usually trained using a single relevant response and multiple randomly selected responses from other contexts (random negatives). To allow for better training and robust evaluation of model-based metrics, we introduce the DailyDialog++ dataset, consisting of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context. Using this dataset, we first show that even in the presence of multiple correct references, n-gram based metrics and embedding based metrics do not perform well at separating relevant responses from even random negatives. While model-based metrics perform better than n-gram and embedding based metrics on random negatives, their performance drops substantially when evaluated on adversarial examples. To check if large scale pretraining could help, we propose a new BERT-based evaluation metric called DEB, which is pretrained on 727M Reddit conversations and then finetuned on our dataset. DEB significantly outperforms existing models, showing better correlation with human judgments and better performance on random negatives (88.27% accuracy). However, its performance again drops substantially when evaluated on adversarial responses, thereby highlighting that even large-scale pretrained evaluation models are not robust to the adversarial examples in our dataset. The dataset and code are publicly available.
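All of the model-based metrics discussed above reduce to the same interface: scoring a (context, response) pair for relevance. As a rough illustration of that interface only (not the authors' released DEB checkpoint or training setup), the sketch below scores a pair with an off-the-shelf BERT next-sentence-prediction head from the Hugging Face transformers library; the model name, the example sentences, and the interpretation of the score as "relevance" are assumptions for illustration.

```python
# Illustrative sketch: an off-the-shelf BERT NSP head as a stand-in for a
# model-based dialog evaluation metric (not the authors' DEB checkpoint).
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # assumed model name
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

context = "How was your trip to Goa?"                       # invented example
response = "It was wonderful, the beaches were beautiful."  # invented example

# Encode the pair as [CLS] context [SEP] response [SEP] and score it.
inputs = tokenizer(context, response, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2)

# In the Hugging Face convention, index 0 corresponds to "B follows A",
# i.e. the response is a plausible continuation of the context.
relevance = torch.softmax(logits, dim=-1)[0, 0].item()
print(f"relevance score: {relevance:.3f}")  # higher = judged more relevant
```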

Highlights

  • Open-domain conversational systems are increasingly in demand for several applications ranging from personal digital assistants to entertainers for recreation

  • We compare the performance of different dialog evaluation metrics in separating relevant references from (i) random negatives, (ii) synthetically crafted adversarial irrelevant responses, and (iii) manually crafted adversarial irrelevant responses

  • We compute the Point Biserial correlation (PBC) between the scores assigned by a metric and the binary target, i.e., a score of 1 for positive responses and 0 for random negative responses (a minimal computation sketch follows this list)
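As a minimal sketch of how such a correlation can be computed, the snippet below uses scipy.stats.pointbiserialr; the scores and labels are made-up illustrative values, not results from the paper.

```python
# Point biserial correlation between a metric's scores and binary relevance labels.
# The numbers below are made-up illustrative values, not results from the paper.
import numpy as np
from scipy import stats

labels = np.array([1, 1, 1, 0, 0, 0])                      # 1 = relevant, 0 = random negative
scores = np.array([0.91, 0.74, 0.62, 0.35, 0.48, 0.12])    # scores assigned by the metric

pbc, p_value = stats.pointbiserialr(labels, scores)
print(f"PBC = {pbc:.3f} (p = {p_value:.3f})")  # higher PBC = better separation
```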


Summary

Introduction

Open-domain conversational systems are increasingly in demand for several applications, ranging from personal digital assistants to entertainers for recreation. Evaluating the responses generated by such systems, however, remains a challenge. Researchers have usually adopted n-gram based metrics (Papineni et al., 2002; Banerjee and Lavie, 2005; Lin, 2004) or embedding based metrics (Forgues et al., 2014; Rus and Lintean, 2012; Zhang et al., 2020a) to compare the model’s response with a single reference. These metrics assume that a valid response should be semantically or lexically similar to the reference, without taking the context of the conversation into consideration. Such n-gram and word embedding based metrics, which rely on lexical and/or semantic match, correlate very weakly with human judgments for dialog evaluation (Liu et al., 2016).
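As a minimal illustration of this single-reference setup, the snippet below uses NLTK's sentence-level BLEU purely as a representative n-gram metric; the context, reference, and response sentences are invented examples. The score depends only on overlap with the one reference and never looks at the conversational context, which is exactly the limitation discussed above.

```python
# N-gram overlap with a single reference; the conversational context never enters the score.
# The sentences are invented examples for illustration.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

context   = "What did you think of the movie?"              # ignored by the metric
reference = "i thought the movie was really engaging".split()
response  = "it was a fun watch , i enjoyed it".split()      # valid reply, low n-gram overlap

smooth = SmoothingFunction().method1
score = sentence_bleu([reference], response, smoothing_function=smooth)
print(f"BLEU = {score:.3f}")  # low score despite the response fitting the context
```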

