Abstract

Multiple-choice exams are frequently used as an efficient and objective method to assess learning, but they are more vulnerable to answer copying than tests based on open questions. Several statistical tests (known as indices in the literature) have been proposed to detect cheating; however, to the best of our knowledge, they all lack mathematical support guaranteeing optimality in any sense. We partially fill this void by deriving the uniformly most powerful (UMP) test under the assumption that the response distribution is known. In practice, however, we must estimate a behavioral model that yields a response distribution for each question. We calculate the empirical type-I and type-II error rates of several indices that assume different behavioral models, using simulations based on real data from twelve nationwide multiple-choice exams taken by 5th and 9th graders in Colombia. We find that, among those studied, the index with the highest power subject to preserving the type-I error rate is one based on the work of Wollack (1997) and van der Linden and Sotaridona (2006); it outperforms the indices developed by Wesolowsky (2000) and Frary, Tideman, and Watts (1977). Applying this index to all twelve exams, we find that examination rooms with stricter proctoring exhibit lower levels of copying. Finally, we propose a Bonferroni correction to control the false-positive rate when detecting massive cheating.
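The Bonferroni correction mentioned above is a standard multiple-testing adjustment: when a copying index is computed for many examinee pairs, each individual test is performed at level alpha/m (m = number of pairs tested) so the family-wise false-positive rate stays at alpha. A minimal illustrative sketch (not the paper's implementation; the function name and example p-values are hypothetical):

```python
# Illustrative sketch of a Bonferroni correction over per-pair
# copying-index p-values. Assumption: one p-value per examinee
# pair tested; we want family-wise type-I error at most alpha.

def bonferroni_flag(p_values, alpha=0.05):
    """Return indices of pairs flagged after Bonferroni correction.

    p_values : sequence of p-values, one per examinee pair.
    alpha    : desired family-wise false-positive rate.
    """
    m = len(p_values)
    threshold = alpha / m  # each individual test is run at alpha/m
    return [i for i, p in enumerate(p_values) if p < threshold]

# With 4 pairs tested at alpha = 0.05, the corrected per-test
# threshold is 0.05 / 4 = 0.0125, so only the first pair is flagged.
flagged = bonferroni_flag([0.001, 0.03, 0.2, 0.6])  # → [0]
```

Note that Bonferroni is conservative: with thousands of pairs the per-test threshold becomes very small, which is precisely why it is suited to flagging only strong (e.g., massive) cheating signals.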
