Abstract

There has long been criticism of using n-gram-based similarity metrics, such as BLEU and NIST, for evaluating the performance of NLG systems. However, these metrics remain popular and have recently been used for evaluating systems which automatically generate questions from documents, knowledge graphs, images, etc. Given the rising interest in such automatic question generation (AQG) systems, it is important to objectively examine whether these metrics are suitable for this task. In particular, it is important to verify whether the metrics used for evaluating AQG systems focus on answerability of the generated question by preferring questions which contain all relevant information, such as question type (wh-type), entities, and relations. In this work, we show that current automatic evaluation metrics based on n-gram similarity do not always correlate well with human judgments about the answerability of a question. To alleviate this problem, and as a first step towards better evaluation metrics for AQG, we introduce a scoring function to capture answerability and show that when this scoring function is integrated with existing metrics, they correlate significantly better with human judgments. The scripts and data developed as part of this work are made publicly available.
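As a rough illustration (not the paper's exact formulation), the sketch below shows how an answerability term could be blended with sentence-level BLEU. The component sets, the weights `w_qt`/`w_content`, and the mixing parameter `delta` are illustrative assumptions chosen for demonstration.

```python
# Illustrative sketch only: combines a toy answerability score with BLEU.
# The component sets, weights, and delta below are assumptions for
# demonstration, not the exact formulation proposed in the paper.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

QUESTION_WORDS = {"who", "what", "when", "where", "why", "which", "how"}

def overlap(ref_tokens, hyp_tokens, vocab=None):
    """Fraction of reference tokens (optionally restricted to `vocab`)
    that also appear in the hypothesis."""
    ref = [t for t in ref_tokens if vocab is None or t in vocab]
    if not ref:
        return 0.0
    hyp = set(hyp_tokens)
    return sum(t in hyp for t in ref) / len(ref)

def answerability(ref_tokens, hyp_tokens, w_qt=0.4, w_content=0.6):
    """Toy answerability: reward matching the question type (wh-word)
    and the content words of the reference question."""
    qt_score = overlap(ref_tokens, hyp_tokens, vocab=QUESTION_WORDS)
    content_score = overlap(ref_tokens, hyp_tokens)
    return w_qt * qt_score + w_content * content_score

def q_bleu(reference, hypothesis, delta=0.7):
    """Blend the toy answerability score with BLEU; delta controls the trade-off."""
    ref_tokens = reference.lower().split()
    hyp_tokens = hypothesis.lower().split()
    bleu = sentence_bleu([ref_tokens], hyp_tokens,
                         smoothing_function=SmoothingFunction().method1)
    return delta * answerability(ref_tokens, hyp_tokens) + (1 - delta) * bleu

print(q_bleu("who wrote the peace of westphalia ?",
             "when was the peace of westphalia signed ?"))
```

The point of such a combination is that a generated question missing the wh-word or the key content words is penalized even when its n-gram overlap with the reference happens to be reasonable.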

Highlights

  • This work is a first step in that direction: we propose that, apart from n-gram similarity, any metric for Automatic Question Generation (AQG) should take into account the answerability of the generated questions

  • Our work is a first step in this direction, and we hope it will lead to more research on designing the right metrics for AQG

  • We took noisy generated questions from three different tasks, viz., document Question Answering (QA), knowledge base QA and visual QA, and showed that the answerability scores assigned by humans did not correlate well with existing metrics
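A minimal sketch of how such a correlation check might be run, assuming per-question human answerability ratings and automatic metric scores have already been collected (the numbers below are made-up placeholders, not data from the paper):

```python
# Minimal sketch: correlating human answerability ratings with an
# automatic metric's scores. The score lists below are placeholders;
# in practice they would come from annotators and from running the
# metric on the generated questions.
from scipy.stats import pearsonr, spearmanr

human_answerability = [0.9, 0.2, 0.7, 0.4, 1.0]       # per-question human ratings
metric_scores       = [0.65, 0.55, 0.60, 0.50, 0.70]  # e.g. BLEU per question

pearson_r, pearson_p = pearsonr(human_answerability, metric_scores)
spearman_rho, spearman_p = spearmanr(human_answerability, metric_scores)

print(f"Pearson r    = {pearson_r:.3f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_rho:.3f} (p = {spearman_p:.3f})")
```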



Introduction

The advent of large-scale datasets for document Question Answering (QA) (Rajpurkar et al., 2016; Nguyen et al., 2016; Joshi et al., 2017; Saha et al., 2018a), knowledge base driven QA (Bordes et al., 2015; Saha et al., 2018b) and Visual QA (Antol et al., 2015; Johnson et al., 2017) has enabled the development of end-to-end supervised models for these tasks [1]. Creating newer datasets for specific domains or augmenting existing datasets with more data is a tedious, time-consuming and expensive process. To alleviate this problem and create even more training data, there is growing interest in developing techniques that can automatically generate questions from a given source, say a document (Du et al., 2017; Du and Cardie, 2017), knowledge base (Reddy et al., 2017; Serban et al., 2016), or image (Li et al., 2017).

[1] https://github.com/PrekshaNema25/Answerability-Metric

[Example document from the paper's illustrative figure: "In 1648, before the term 'genocide' had been coined, the Peace of Westphalia was established to protect ethnic, racial and in some instances religious groups."]
