Abstract

Multidimensional scoring evaluates each constructed-response answer on more than one rating dimension or trait, such as lexicon, organization, and supporting ideas, rather than a single holistic score, helping students distinguish among different dimensions of writing quality. In this work, we present a bilevel learning model that combines two objectives: multidimensional automated scoring, and the analysis and interpretation of students' writing structure. These dual objectives are achieved by a supervised model, Latent Dirichlet Allocation Multitask Learning (LDAMTL), which integrates a topic model with a multitask learning model equipped with an attention mechanism. Two empirical data sets were used to evaluate LDAMTL's performance. On one hand, results suggested that LDAMTL achieves better scoring performance and QW-κ values than two competitor models, supervised latent Dirichlet allocation and Bidirectional Encoder Representations from Transformers, at the 5% significance level. On the other hand, the extracted topic structures revealed that students with higher language scores tended to employ more compelling words to support the arguments in their answers. This study suggests that LDAMTL not only delivers strong scoring performance by combining the underlying shared representation of each topic with the representations learned by the neural network, but also helps in understanding students' writing.
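
To make the architecture described above concrete, the following is a minimal sketch of a multitask scorer that fuses LDA topic proportions with an attention-pooled text representation, with one output head per rating dimension. All layer sizes, head names, and the fusion strategy here are illustrative assumptions for exposition, not the authors' exact LDAMTL implementation.

# Minimal sketch; dimension names, layer sizes, and fusion are assumed, not the paper's exact design.
import torch
import torch.nn as nn

class MultiTaskTopicScorer(nn.Module):
    """Fuses LDA topic proportions with an attention-pooled text encoding,
    then predicts one score per rating dimension (the multitask part)."""

    def __init__(self, vocab_size, embed_dim=128, n_topics=20,
                 dims=("lexicon", "organization", "support")):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, embed_dim, batch_first=True,
                               bidirectional=True)
        self.attn = nn.Linear(2 * embed_dim, 1)      # additive attention scores
        fused_dim = 2 * embed_dim + n_topics         # text features + topic proportions
        self.heads = nn.ModuleDict({d: nn.Linear(fused_dim, 1) for d in dims})

    def forward(self, token_ids, topic_props):
        h, _ = self.encoder(self.embed(token_ids))   # (batch, seq, 2*embed_dim)
        w = torch.softmax(self.attn(h), dim=1)       # attention weights over tokens
        pooled = (w * h).sum(dim=1)                  # attention-pooled answer vector
        fused = torch.cat([pooled, topic_props], dim=-1)
        return {d: head(fused).squeeze(-1) for d, head in self.heads.items()}

# Usage with random data: 4 answers, 50 tokens each, 20 LDA topic proportions.
model = MultiTaskTopicScorer(vocab_size=5000)
tokens = torch.randint(1, 5000, (4, 50))
topics = torch.rand(4, 20)
topics = topics / topics.sum(dim=1, keepdim=True)    # normalize to proportions
scores = model(tokens, topics)
print({dim: s.shape for dim, s in scores.items()})

Sharing the encoder and attention across all dimension heads is one way to realize the shared representation the abstract refers to; the topic proportions supply the interpretable structure that supports the writing-analysis objective.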
