Abstract
Eye movements can reflect task demands on a procedural level beyond the classical methods of evaluating test scores, suggesting that eye tracking is a method for item analysis that can be utilized to explore test and item structures.
Highlights
Since the development of the Force Concept Inventory (FCI) [1], the design and use of research-based, distractor-driven multiple-choice items has accelerated.
We conclude that eye movements can reflect task demands on a procedural level well beyond the classical methods of evaluating test scores, eventually making eye tracking an additional method for item analysis that can be utilized to confirm or explore test and item structures.
As we show, some eye-tracking studies already exist for multiple-choice tests in physics education research (PER), but a grouping of test items based on apparently similar requirements has not yet been done.
Summary
Since the development of the Force Concept Inventory (FCI) [1], the design and use of research-based, distractor-driven multiple-choice items has accelerated. The PhysPort collection comprises more than 90 research-based assessments for introductory and upper-level physics concepts, scientific reasoning, problem solving, or student attitudes and beliefs [2]. The methodologies utilized to create and evaluate these assessments include student interviews, expert reviews, and statistical analysis of the instruments' psychometric properties. The way in which test participants visually interact with the tasks has hardly been taken into account so far, especially not for the purpose of a systematic instrument or item analysis. Previous studies, one using the FCI and the others using the test of understanding graphs in kinematics (TUG-K) [3], have examined the test takers' visual attention distribution on the stems and options.