Large-scale assessments play a key role in education: educators and stakeholders need to know what students know and can do in order to inform education policies and interventions in teaching and learning. However, an assessment score alone may not be enough: educators also need to know why students received low scores, how students engaged with the tasks and the assessment, and how students with different skill levels worked through the assessment. Process data, combined with response data, reflect students’ test-taking processes and can provide educators with such rich information, but manually labelling these complex data is hard to scale for large-scale assessments. Starting from scratch, we leveraged machine learning techniques (including supervised, unsupervised, and active learning) and experimented with a general human-centred AI approach to help subject matter experts efficiently and effectively make sense of big data, including students’ interaction sequences with the digital assessment platform (e.g., response, timing, and tool-use sequences), to provide process profiles, that is, holistic views of students’ entire test-taking processes on the assessment, so that performance can be viewed in context. Process profiles may help identify different sources of low performance and help generate rich feedback for educators and policy makers. The released National Assessment of Educational Progress (NAEP) Grade 8 mathematics data were used to illustrate our proposed approach.
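As an illustrative sketch of the unsupervised component only (not the authors’ actual pipeline), per-student interaction logs might be reduced to a few simple features and clustered into candidate process profiles. The feature choices, event labels, and data below are hypothetical, and a tiny k-means stands in for whatever clustering method the study used:

```python
# Hypothetical sketch: cluster simple per-student features derived from
# interaction logs into "process profiles" with a minimal k-means.
import numpy as np

def extract_features(log):
    """log: list of (action, timestamp) events for one student.
    Returns [number of events, tool-use count, total time on task]."""
    times = [t for _, t in log]
    n_tool = sum(1 for a, _ in log if a == "tool")  # hypothetical event label
    total_time = max(times) - min(times) if times else 0.0
    return np.array([len(log), n_tool, total_time], dtype=float)

def kmeans(X, k, iters=50, seed=0):
    """Bare-bones k-means: assign each row to its nearest centre,
    then recompute centres, repeated a fixed number of times."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two hypothetical behaviour patterns: quick, tool-free runs versus
# longer, tool-using runs (timestamps in seconds).
logs = [
    [("item", 0.0), ("item", 5.0)],
    [("item", 0.0), ("item", 4.0)],
    [("item", 0.0), ("tool", 30.0), ("tool", 60.0), ("item", 90.0)],
    [("item", 0.0), ("tool", 25.0), ("item", 80.0)],
]
X = np.stack([extract_features(g) for g in logs])
labels, _ = kmeans(X, k=2)
```

With these toy features the fast, tool-free students land in one cluster and the slower tool users in the other; in practice the profiles would be interpreted and refined by subject matter experts, which is where the human-centred and active-learning elements come in.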