Abstract

E-learning is increasingly used to support student learning in higher education. This results in huge amounts of item response data containing valuable information about students’ strengths and weaknesses that can be used to provide effective feedback to both students and teachers. However, in current practice, feedback in e-learning is often given in the form of a simple proportion of correctly solved items rather than diagnostic, actionable feedback. Diagnostic classification models (DCMs) provide opportunities to model the item response data from formative assessments in online learning environments and to obtain diagnostic information to improve teaching and learning. This simulation study explores the data requirements (i.e., assessment length, respondent sample size) for applying log-linear DCMs to empirical data. We thereby provide guidance to educational practitioners on how many items need to be administered to how many students in order to accurately assess skills at different levels of specificity using DCMs. In addition, effects of misspecification of the dimensionality of the assessed skills on model fit indices are explored. Results show that detecting these misspecifications statistically with DCMs can be problematic. Recommendations and implications for educational practice are discussed.

Highlights

  • Feedback in education is a powerful tool to enhance student learning (Black and Wiliam, 1998; Nicol and Macfarlane-Dick, 2006)

  • By means of a simulation study, we explore whether it is feasible to use log-linear diagnostic classification models (DCMs) to obtain diagnostic information about students based on e-learning data with respondent sample sizes common in this domain

  • We provide minimum respondent sample sizes that are necessary to (1) obtain high accuracy in classifying students as masters or non-masters of different numbers of attributes based on assessments of different lengths, and (2) allow for adequate evaluation of model fit with the M2 and RMSEA2 statistics (a simulation sketch of the classification setup follows below)
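To make the classification task concrete, the sketch below simulates dichotomous item responses under the DINA model (a constrained special case of the log-linear DCM family studied here) and classifies each student's attribute profile by maximum likelihood. All settings (sample size, assessment length, number of attributes, the Q-matrix, and the guessing/slipping parameter ranges) are illustrative assumptions, not values from the study, and in a real analysis the item parameters would be estimated (e.g., by EM) rather than known.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

# Illustrative settings (assumptions, not values from the study).
n_students = 1000    # respondent sample size
n_items = 20         # assessment length
n_attributes = 3     # number of skills (attributes) measured

# Q-matrix: entry (j, k) = 1 if item j requires attribute k.
Q = rng.integers(0, 2, size=(n_items, n_attributes))
Q[Q.sum(axis=1) == 0, 0] = 1  # every item must require at least one attribute

# True attribute profiles: each attribute mastered with probability 0.5.
alpha = rng.random((n_students, n_attributes)) < 0.5

# DINA item parameters: guessing (g) and slipping (s), drawn from assumed ranges.
g = rng.uniform(0.05, 0.25, n_items)
s = rng.uniform(0.05, 0.25, n_items)

# Ideal response: 1 iff the student masters all attributes the item requires.
eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2)   # (n_students, n_items)

# DINA response probability: P(X = 1) = (1 - s) if eta = 1, else g.
p_correct = np.where(eta, 1 - s, g)
X = (rng.random((n_students, n_items)) < p_correct).astype(int)

# Maximum-likelihood classification over all 2^K candidate profiles,
# using the true item parameters (in practice these are estimated by EM).
profiles = np.array(list(product([0, 1], repeat=n_attributes)))
eta_prof = np.all(profiles[:, None, :] >= Q[None, :, :], axis=2)   # (2^K, n_items)
p_prof = np.where(eta_prof, 1 - s, g)
loglik = X @ np.log(p_prof).T + (1 - X) @ np.log(1 - p_prof).T     # (n_students, 2^K)
alpha_hat = profiles[loglik.argmax(axis=1)].astype(bool)

# Attribute-level and whole-profile classification accuracy.
print("Per-attribute accuracy:", np.round((alpha_hat == alpha).mean(axis=0), 3))
print("Whole-profile accuracy:", round((alpha_hat == alpha).all(axis=1).mean(), 3))
```

In practice, fitting log-linear DCMs and computing the M2 and RMSEA2 fit statistics discussed above would typically be done with dedicated software such as the R packages GDINA or CDM; the sketch only illustrates the data-generating and classification logic that such analyses rest on.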


Introduction

Feedback in education is a powerful tool to enhance student learning (Black and Wiliam, 1998; Nicol and Macfarlane-Dick, 2006). The use of e-learning environments to support learning processes in higher education has grown over the last decades, and the widespread use of these new technologies has led to the availability of huge amounts of student data. While these data can be a rich source of information for personalized teaching and learning, in themselves they do not lead to quality improvement of education. This requires the data to be transformed into meaningful information, i.e., feedback that can be acted upon, using learning analytics approaches in order to provide both students and teachers with knowledge about learning progress (Ferguson, 2012). Despite the opportunities the emerging field of learning analytics research entails and the available expertise at universities, large-scale adoption of learning analytics to improve teaching and learning in higher education is lacking (Viberg et al., 2018).
