Abstract

The present study used process data from a computer-based problem-solving task as behavioural indicators of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained and characterised as high effort, medium effort, low effort, and planner. Regression modelling indicated that among students who failed to solve the task, the level of effort invested before giving up positively predicted overall test performance. Among students who solved the task, level of effort was instead weakly negatively related to test performance. A low level of behavioural effort before giving up the task was also related to lower self-reported effort. Results suggest that effort invested before giving up provides information about test-takers’ motivation to spend effort on the test. We conclude that process data could augment existing methods of assessing test-taking effort.

Highlights

  • Items in large-scale assessments are often scored by assigning an integer value of 0 or 1 to a fixed or open response

  • One key finding from the present study is that response process data from human–computer interactions can be used to shed further light on the complex relationship between test-taking effort and test-taking motivation

  • Exploring these data can yield both theoretical and methodological contributions. Another key finding is that within-item behaviours can be relevant to consider alongside other measures of test-taking effort such as response times and self-reports


Introduction

Items in large-scale assessments are often scored by assigning an integer value of 0 or 1 to a fixed or open response. With computer-based assessments comes the possibility to trace human–computer interactions, which can give a glimpse of what was done between item presentation and answer selection. These digital traces have been referred to as a kind of process data, and can be used as a source of validity evidence based on response processes (American Educational Research Association et al., 2014; Goldhammer et al., 2017). One area that has received less attention from research on process data is test-taking effort, which we aim to explore in this study by examining test-takers’ within-item behaviours in a Programme for International Student Assessment (PISA) 2012 problem-solving item.
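The methodology described above (extracting behavioural variables from raw interaction logs and clustering them) can be sketched in minimal form. The feature choices and data below are purely illustrative assumptions, not the study's actual variables or pipeline; a plain k-means routine stands in for whatever clustering method was used.

```python
# Illustrative sketch only: cluster hypothetical per-student behavioural
# features (number of interactions, seconds spent on the item) extracted
# from item-level log data. Neither the features nor the values come from
# the study; k=4 simply mirrors the four clusters reported.
import math
import random

# Hypothetical feature vectors: (interaction count, time on item in seconds)
features = [
    (2, 15), (3, 20), (25, 300), (30, 280),
    (12, 120), (14, 150), (4, 200), (5, 220),
]

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute centroids as cluster means, repeating `iters` times."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        # Recompute each centroid; keep the old one if its cluster emptied.
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(features, k=4)
```

In practice, features would first be standardised so that time (in seconds) does not dominate interaction counts, and the resulting clusters would then be characterised post hoc, as the study does with its high-effort, medium-effort, low-effort, and planner labels.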

