Abstract

Students who fail state writing tests may be subject to a number of negative consequences. Identifying students who are at risk of failure affords educators time to intervene and prevent such outcomes. Yet, little research has examined the classification accuracy of predictors used to identify at-risk students in the upper-elementary and middle-school grades. Hence, the current study compared multiple scoring methods with regard to their accuracy for identifying students at risk of failing a state writing test. In the fall of 2012, students composed an essay in response to a persuasive prompt on a computer-based benchmark writing test, and in the spring of 2013 they participated in the state writing assessment. Predictor measures included prior writing achievement, human holistic scoring, automated essay scoring via Project Essay Grade (PEG), total words written, compositional spelling, and sentence accuracy. Classification accuracy was measured using the area under the ROC curve. Results indicated that prior writing achievement and PEG Overall Score had the highest classification accuracy. A multivariate model combining these two measures resulted in only slight improvements over univariate prediction models. Study findings indicated that the choice of scoring method affects classification accuracy, and that automated essay scoring can be used to accurately identify at-risk students.
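As an illustration only (not the study's code or data), the area-under-the-ROC-curve statistic used here to quantify classification accuracy can be computed for a single predictor via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen at-risk student receives a higher risk score than a randomly chosen not-at-risk student. All scores and labels below are hypothetical.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive case (label 1) scores higher than a
    randomly chosen negative case (label 0); ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical data: 1 = failed the state test (at risk), 0 = passed.
labels = [1, 1, 1, 0, 0, 0, 0]
benchmark = [12, 15, 20, 18, 25, 30, 28]  # e.g., a fall benchmark score
# Lower benchmark scores should indicate higher risk, so negate them
# to treat larger values as "more at risk" before computing AUC.
risk_score = [-s for s in benchmark]
print(round(roc_auc(labels, risk_score), 3))  # prints 0.917
```

An AUC of 0.5 corresponds to chance-level prediction and 1.0 to perfect separation, which is why it is a natural common scale for comparing scoring methods such as holistic scores, PEG scores, and word counts.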
