Abstract

With the increased physical-distancing restrictions due to the COVID-19 pandemic, remote proctoring emerged as an alternative to traditional onsite proctoring to ensure the continuity of essential assessments, such as computer-based medical licensing exams. Recent literature has highlighted that proctoring modality can meaningfully affect examinees' test experience, including behavior reflected in response-time data. However, whether these differences also influence test performance has remained unclear. One limitation of the current literature is the lack of a rigorous learning analytics framework for evaluating the comparability of computer-based exams delivered under different proctoring settings. To address this gap, the current study introduces a machine-learning-based framework that analyzes computer-generated response-time data to investigate the association between proctoring modality and examinees' time-use patterns in high-stakes assessments. We demonstrated the effectiveness of this framework using empirical data collected from a large-scale, high-stakes medical licensing exam administered in Canada. By applying the machine-learning-based framework, we were able to extract examinee-specific response-time data for each proctoring modality and identify distinct time-use patterns among examinees depending on their proctoring modality.
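The abstract does not specify the algorithms used in the framework, so the following is only a minimal illustrative sketch of the general idea: summarizing examinee-level response-time features, clustering them into candidate time-use profiles, and comparing cluster membership across proctoring modalities. The feature names, the choice of k-means, and the synthetic data are assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: cluster examinee-level response-time features and
# cross-tabulate the resulting time-use profiles by proctoring modality.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_examinees = 500

# Simulated per-examinee response-time summaries (seconds per item); illustrative only.
df = pd.DataFrame({
    "modality": rng.choice(["onsite", "remote"], size=n_examinees),
    "mean_item_time": rng.normal(75, 15, n_examinees),
    "sd_item_time": rng.normal(30, 8, n_examinees),
    "prop_rapid_responses": rng.beta(2, 20, n_examinees),  # share of very fast answers
})

features = ["mean_item_time", "sd_item_time", "prop_rapid_responses"]
X = StandardScaler().fit_transform(df[features])

# Group examinees into candidate time-use patterns.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
df["time_use_cluster"] = kmeans.fit_predict(X)

# Compare how the patterns are distributed across proctoring modalities.
print(pd.crosstab(df["modality"], df["time_use_cluster"], normalize="index").round(2))
```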
