Abstract

In standardized multiple-choice testing, examinees may change their answers for various reasons. The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at many testing organizations. This article exploits a recent approach in which information about all previous answers is used only to partition the administered items into two disjoint subtests: items where an AC occurred and items where no AC occurred. Two optimal statistics are described, each measuring the difference in performance between these subtests, with performance estimated from the final responses. Answer-changing behavior was simulated so that realistic distributions of wrong-to-right, wrong-to-wrong, and right-to-wrong ACs were obtained under conditions controlled by three independent variables: type of test, amount of aberrancy, and amount of uncertainty. Results of the computer simulations confirmed the theoretical results on the optimal power of both statistics and yielded several recommendations for practitioners.
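
The partition-and-compare idea can be illustrated with a minimal sketch. This is a simplified stand-in, not the article's optimal statistics: given an examinee's final 0/1 item scores and a per-item flag marking where an AC occurred, the items are split into the AC and non-AC subtests and a standardized difference in proportion-correct is computed. The function name, the normal-approximation standard error, and the example data are illustrative assumptions.

```python
# Minimal sketch (not the article's exact statistics): split the test into the
# AC and non-AC subtests and compare proportion-correct performance between
# them, using only the final responses.
import numpy as np

def ac_performance_gap(final_scores, ac_flags):
    """Standardized difference in proportion-correct between AC and non-AC items.

    final_scores : array of 0/1 scores based on the examinee's final responses
    ac_flags     : boolean array, True where an answer change occurred
    """
    final_scores = np.asarray(final_scores, dtype=float)
    ac_flags = np.asarray(ac_flags, dtype=bool)

    ac_items = final_scores[ac_flags]         # subtest 1: items with an AC
    other_items = final_scores[~ac_flags]     # subtest 2: items without an AC
    if len(ac_items) == 0 or len(other_items) == 0:
        return np.nan                         # gap undefined without both subtests

    p1, p2 = ac_items.mean(), other_items.mean()
    # pooled-proportion standard error of the difference (normal approximation)
    p = final_scores.mean()
    se = np.sqrt(p * (1 - p) * (1 / len(ac_items) + 1 / len(other_items)))
    return (p1 - p2) / se if se > 0 else np.nan

# Illustration: unusually high performance on AC items relative to the rest of
# the test (e.g., many wrong-to-right changes) yields a large positive value.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=60)
changed = rng.random(60) < 0.15
print(ac_performance_gap(scores, changed))
```

The article's statistics differ in how the performance difference is weighted and standardized to be optimal; the sketch only shows the shared structure of partitioning by AC status and comparing performance estimated from final responses.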
