Abstract

Automated essay scoring (AES) involves using computer technology to grade written assessments and assign a score based on their perceived quality. AES has been among the most significant Natural Language Processing (NLP) applications due to its educational and commercial value. Similar to many other NLP tasks, training a model for AES typically involves acquiring a large amount of labeled data specific to the essays being graded, which usually incurs a substantial cost. In this study, we consider two recent few-shot learning methods to enhance the predictive performance of machine learning models for AES tasks. Specifically, we experiment with a prompt-based few-shot learning method, pattern-exploiting training (PET), and a prompt-free few-shot learning strategy, SetFit, and compare these against vanilla fine-tuning. Our numerical study shows that PET can provide substantial performance gains over the other methods and can effectively boost performance when access to labeled data is limited. On the other hand, PET is found to be the most computationally expensive of the few-shot learning methods considered, while SetFit is the fastest.
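Since the abstract contrasts the prompt-free SetFit approach with PET and vanilla fine-tuning, a minimal sketch of what few-shot training with the SetFit library might look like is given below. The checkpoint name, example essays, score bands, and hyperparameters are illustrative assumptions, and essay scores are treated here as discrete classes rather than a continuous scale.

```python
# A minimal sketch of few-shot training with the SetFit library, assuming essay
# scores are treated as discrete classes; the checkpoint name, example essays,
# and hyperparameters are illustrative placeholders, not the paper's setup.
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# A handful of labeled essays (hypothetical data): text plus an integer score band.
train_ds = Dataset.from_dict({
    "text": ["First sample essay ...", "Second sample essay ...", "Third sample essay ..."],
    "label": [0, 1, 2],
})

# Start from a pretrained sentence-transformer backbone.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss over generated sentence pairs
    batch_size=16,
    num_iterations=20,                # text pairs generated per labeled example
    num_epochs=1,
)
trainer.train()

# Predict score bands for unseen essays.
predictions = model.predict(["An unseen essay to score ..."])
```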
