Abstract

Objective
The purpose of the present research is to establish measurement equivalence and test differences in reliability between computerized and paper-and-pencil tests of spatial cognition.

Background
Researchers have increasingly adopted computerized test formats, but few attempt to establish equivalence between computer-based and paper-based tests. The mixed results in the literature on the test mode effect, which occurs when performance differs as a function of test medium, highlight the need to test for, rather than assume, measurement equivalence. One domain that has been increasingly computerized, and is thus in need of tests of measurement equivalence across test modes, is spatial cognition.

Method
In the present study, 244 undergraduate students completed two measures of spatial ability (i.e., spatial visualization and cross-sectioning) in either computer-based or paper-and-pencil format.

Results
Measurement equivalence was not supported across computer-based and paper-based formats for either spatial test. The results also indicated that administration type affected the types of errors made on the spatial visualization task, further highlighting the conceptual differences between test media. Paper-based tests also demonstrated higher reliability than the computerized versions.

Conclusion
The results of the measurement equivalence tests caution against treating computer- and paper-based versions of spatial measures as equivalent. We encourage subsequent work to demonstrate test mode equivalence before using spatial measures, because current evidence suggests the two formats may not reliably capture the same construct.

Application
The assessment of test type differences may inform the medium in which spatial cognition tests are administered.

