Abstract

Machine Reading Comprehension (MRC) is a challenging task in the field of Natural Language Processing (NLP). It has evolved rapidly in recent years owing to the development of large-scale datasets and deep learning technology. Although numerous MRC models have been developed for Question Answering (QA) systems, existing models are still not on par with human reading comprehension. To assess and improve model performance on the proposed datasets, more advanced evaluation metrics have been introduced. Therefore, this work surveys existing MRC models and the evaluation metrics used with them, in order to address the gap between MRC and human reading comprehension.
