Abstract

One of the most common technology-enhanced items used in large-scale K-12 testing programs is the drag-and-drop response interaction. The main research questions in this study are: (a) Does adding a drag-and-drop interface to an online test affect the accuracy of student performance? (b) Does adding a drag-and-drop interface to an online test affect the speed of student performance? In three experiments involving fourth, sixth, and eighth graders, respectively, students answered reading comprehension questions presented in a conventional (i.e., paper-based) format or a drag-and-drop format. The tests consisted of four sentence-ordering items in Experiment 1, four graphic organizer items in Experiment 2, and two cloze tests and two graphic organizer items in Experiment 3. The conventional and drag-and-drop groups were compared on test performance (i.e., accuracy) and efficiency (i.e., response time and number of mouse clicks). Across the three experiments, the two groups did not differ in mean performance, but the drag-and-drop group responded more efficiently than the conventional group (faster response time, d = 0.62, and fewer mouse clicks, d = 1.13).
