Abstract

During the pandemic, the use of question pools for online testing was recommended to mitigate cheating, exposing multitudes of STEM students across the globe to this practice. Yet little systematic analysis of the practice appears to exist. In this study, we investigated student performance on questions in our online exam pools across several STEM courses: upper-level physiology, general chemistry, and introductory physics. We found that the difficulty of creating analogous questions within a pool varied by question type, with quantitative problems being the easiest to vary without altering average student performance. However, when instructors created pools by rearranging aspects of a question, posing opposite counterparts of a concept, or formulating questions assessing the same learning objective, we sometimes discovered differences in student learning between seemingly closely related ideas, illustrating the challenge of our own expert blind spot. We offer suggestions for instructors on improving the equity of question pools, such as limiting how many variables change within a given pool and “test driving” proposed questions in lower-stakes assessments.
