Abstract

Educational large-scale studies typically adopt highly standardized settings to collect cognitive data on large samples of respondents. Increasing costs alongside dwindling response rates in these studies necessitate exploring alternative assessment strategies such as unsupervised web-based testing. Before such assessment modes can be implemented on a broad scale, their impact on cognitive measurements needs to be quantified. Therefore, an experimental study was conducted on N = 17,473 university students from the German National Educational Panel Study. Respondents were randomly assigned to one of three assessment modes (supervised paper-based, supervised computerized, or unsupervised web-based) to work on a test of scientific literacy. Mode-specific effects on selection bias, measurement bias, and predictive bias were examined. The results showed a higher response rate in web-based testing as compared to the supervised modes, without introducing a pronounced mode-specific selection bias. Analyses of differential test functioning showed systematically larger test scores in paper-based testing, particularly among low- to medium-ability respondents. Predictive bias in web-based testing was observed for one of four criteria of study-related success. Overall, the results indicate that unsupervised web-based testing is not strictly equivalent to the other assessment modes. However, the bias introduced by web-based testing was generally small. Thus, unsupervised web-based assessments seem to be a feasible option in cognitive large-scale studies in higher education.

Highlights

  • Educational large-scale studies typically adopt highly standardized settings to collect cognitive data on large samples of respondents

  • Students who were randomly assigned to the modes showed notably higher response rates in unstandardized and unsupervised web-based assessments (54.2%) as compared to standardized and supervised assessments: paper-based tests (PBA) (25.6%) and computer-based tests (CBA) (18.2%)

  • PBA and CBA non-responders who were switched to web-based assessments (WBA) showed a response rate of 25.6%


Introduction

Behavior Research Methods (2021) 53:1202–1217

Educational large-scale studies typically adopt highly standardized settings to collect cognitive data on large samples of respondents. Highly mobile populations (e.g., university students) can further endanger response rates in these studies because timely appointments for supervised testing cannot be arranged (e.g., Haunberger, 2011; Kuhnimhof, Chlond and Zumkeller, 2006). To mitigate these challenges, web-based settings or mixed-mode designs adopting different data collection modes for different respondents have been considered (Al Baghal, 2019). In unsupervised settings, various disturbances such as background noise or other people being able to see the test taker’s responses can potentially influence test-taking behavior (see Gnambs and Kaspar, 2015, for respective evidence in the context of survey research). Technological differences such as different screen sizes or input devices (e.g., mouse versus touchscreen) might introduce further construct-irrelevant variance that could distort measurements in unsupervised web-based settings when the assessment can be accessed with both mobile and non-mobile devices (Brown and Grossenbacher, 2017). Predicting later life outcomes such as occupation from cognitive measures is of particular interest in educational science, where the attempt is made to relate the two (e.g., Blossfeld, Schneider and Doll, 2009).

