Abstract

Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the current study was to begin to address this gap by examining whether individual differences could be modeled in a source-based writing context. Undergraduate students (n=106) wrote essays in response to multiple sources and then completed an assessment of their vocabulary knowledge. Natural language processing tools were used to characterize the linguistic properties of the source-based essays at four levels: descriptive, lexical, syntactic, and cohesion. Finally, machine learning models were used to predict students' vocabulary scores from these linguistic features. The models accounted for approximately 29% of the variance in vocabulary scores, suggesting that the linguistic features of source-based essays are reflective of individual differences in vocabulary knowledge. Overall, this work suggests that automated text analyses can help to understand the role of individual differences in the writing process, which may ultimately help to improve personalization in computer-based learning environments.
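
The abstract does not specify the particular feature set or learning algorithm used, but the general modeling pipeline can be illustrated with a minimal sketch: extract essay-level linguistic features, fit a regression model predicting vocabulary scores, and evaluate the cross-validated variance explained (R²). The feature matrix, sample data, and choice of linear regression below are placeholders for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per essay, columns are linguistic
# indices at the descriptive, lexical, syntactic, and cohesion levels
# (e.g., essay length, lexical diversity, clause complexity, overlap measures).
rng = np.random.default_rng(0)
X = rng.normal(size=(106, 12))   # placeholder: 106 essays x 12 features
y = rng.normal(size=106)         # placeholder vocabulary scores

# Cross-validated R^2: the proportion of variance in vocabulary scores
# explained by the essay features (the study reports roughly 0.29).
model = LinearRegression()
r2_scores = cross_val_score(model, X, y, cv=10, scoring="r2")
print(f"Mean cross-validated R^2: {r2_scores.mean():.2f}")
```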
