Abstract

In low-stakes assessments, test performance has few or no consequences for examinees themselves, so examinees may not be fully engaged when answering the items. Instead of engaging in solution behaviour, disengaged examinees might randomly guess or generate no response at all. When ignored, examinee disengagement poses a severe threat to the validity of results obtained from low-stakes assessments. Statistical modelling approaches in educational measurement have been proposed that account for non-response or for guessing, but they do not consider both types of disengaged behaviour simultaneously. We bring together research on modelling examinee engagement and research on missing values and present a hierarchical latent response model for identifying and modelling the processes associated with examinee disengagement jointly with the processes associated with engaged responses. To that end, we employ a mixture model that identifies disengagement at the item-by-examinee level by assuming different data-generating processes underlying item responses and omissions, as well as different response-time distributions associated with engaged and disengaged behaviour. By modelling examinee engagement within a latent response framework, the model allows assessing how examinee engagement relates to ability and speed, and identifying items that are likely to evoke disengaged test-taking behaviour. An illustration of the model by means of an application to real data is presented.
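To make the mixture structure concrete, the following is a minimal notational sketch; the symbols, the engagement indicator, and the specific component distributions are our illustrative assumptions rather than the paper's exact parameterization. Let \(\Delta_{pi} \in \{0,1\}\) indicate whether person \(p\) behaves engaged on item \(i\). The joint distribution of the (possibly omitted) response \(x_{pi}\) and response time \(t_{pi}\) can then be written as a two-component mixture:

\[
P(x_{pi}, t_{pi}) \;=\; \pi_{pi}\, P(x_{pi} \mid \theta_p, b_i)\, f(t_{pi} \mid \tau_p, \beta_i) \;+\; (1 - \pi_{pi})\, P_d(x_{pi})\, f_d(t_{pi}),
\]

where \(\pi_{pi} = P(\Delta_{pi} = 1)\) is the probability of engagement, \(P(x_{pi} \mid \theta_p, b_i)\) is an IRT model (e.g., Rasch) for engaged responses with ability \(\theta_p\) and item difficulty \(b_i\), \(f(t_{pi} \mid \tau_p, \beta_i)\) is a response-time model (e.g., lognormal) with speed \(\tau_p\) and time intensity \(\beta_i\), \(P_d\) is the disengaged response process placing mass on random guesses among response options and on omission, and \(f_d\) is a separate response-time distribution for disengaged behaviour. A hierarchical layer linking person parameters (ability, speed, engagement propensity) and item parameters would then support inferences on how engagement relates to ability and speed, in the spirit of hierarchical speed-accuracy modelling.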

Highlights

  • The aim of large-scale assessments (LSAs) is to measure examinee competencies using test items

  • We evaluate the statistical performance of the proposed model, illustrate how it differs from current approaches for identifying examinee disengagement and handling item omissions, and demonstrate its application employing data from the Programme for International Student Assessment (PISA) 2015

  • Conceptualizing disengaged test-taking behaviour in terms of both randomly guessing and omitting, we present a hierarchical latent response model for identifying and modelling the processes associated with examinee disengagement jointly with the processes associated with engaged responses

Introduction

The aim of large-scale assessments (LSAs) is to measure examinee competencies using test items. Since test performance in LSAs typically has few or no consequences for examinees themselves, examinees may not put full effort into applying their abilities, but instead proceed quickly through the assessment by randomly guessing on multiple-choice (MC) items, answering items with an open-response (OR) format only perfunctorily, or generating no response at all (Verbic & Tomic, 2009; Wise & Gao, 2017). Such disengaged test-taking behaviour poses a severe threat to the validity of results obtained from LSAs, since test scores assumed to reflect the level of competency may be confounded with the level of disengagement (Braun, Kirsch, & Yamamoto, 2011). Identifying and understanding the processes associated with examinee disengagement is therefore paramount for drawing valid inferences on examinee ability.
