Abstract

Inference plays a key role in reading comprehension. However, assessing inference in reading is a complex process that relies on the judgment of trained experts. In this study, we explore objective and automated methods for assessing inference in readers’ responses using natural language processing. Specifically, classifiers were trained to detect inference from a pair of input texts and reader responses by fine-tuning three widely used pre-trained language models. The effects of model size and pre-training strategy on the accuracy of inference classification were investigated. The highest F1 score of 0.92 was achieved by fine-tuning the robustly optimized 12-layer BERT model (RoBERTa-base). Fine-tuning the larger 24-layer model (RoBERTa-large) did not improve classification accuracy. Error analysis provides insight into the relative difficulty of classifying inference subtypes. The proposed method demonstrates the feasibility of automated quantification of inference during reading and offers the potential to facilitate individualized reading instruction.
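The setup described in the abstract, fine-tuning a pre-trained language model to classify whether a reader response to a source text contains an inference, can be sketched with the Hugging Face Transformers sequence-pair classification API. The snippet below is an illustrative sketch only, not the authors' code: the dataset fields, example passage, label convention, and hyperparameters are assumptions for demonstration.

```python
# Illustrative sketch (not the authors' implementation): fine-tune RoBERTa-base
# to classify whether a reader response contains an inference, encoding the
# source text and the response as a sentence pair. Fields and labels are hypothetical.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)  # assumed labels: 1 = inference, 0 = no inference

# Hypothetical labelled example: (source passage, reader response, label)
examples = {
    "text":     ["The street was wet and people were carrying umbrellas."],
    "response": ["It had probably been raining earlier."],
    "label":    [1],
}
dataset = Dataset.from_dict(examples)

def encode(batch):
    # Encode source text and reader response jointly as a sequence pair
    return tokenizer(batch["text"], batch["response"],
                     truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(encode, batched=True)

args = TrainingArguments(output_dir="inference-clf",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=dataset).train()
```

The same script could be pointed at "roberta-large" to compare model sizes, mirroring the base-versus-large comparison reported in the abstract.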
