Abstract

A year-long study of 131 second and third graders in 12 classrooms compared three daily 20-minute treatments. a) Fifty-eight students in six classrooms used the 1999–2000 version of Project LISTEN's Reading Tutor, a computer program that uses automated speech recognition to listen to a child read aloud and gives spoken and graphical assistance. Students took daily turns using one shared Reading Tutor in their classroom while the rest of their class received regular instruction. b) Thirty-four students in the other six classrooms were pulled out daily for one-on-one tutoring by certified teachers. To control for materials, the human tutors used the same set of stories as the Reading Tutor. c) Thirty-nine students served as in-classroom controls, receiving regular instruction without tutoring. We compared students' pre- to post-test gains on the Word Identification, Word Attack, Word Comprehension, and Passage Comprehension subtests of the Woodcock Reading Mastery Test, and in oral reading fluency. Surprisingly, the human-tutored group significantly outgained the Reading Tutor group only in Word Attack (main effect p < .02, effect size .55). Third graders in both the computer- and human-tutored conditions outgained the control group significantly in Word Comprehension (p < .02, respective effect sizes .56 and .72) and suggestively in Passage Comprehension (p = .14, respective effect sizes .48 and .34). No differences between groups in gains in Word Identification or fluency were significant. These results are consistent with an earlier study in which students who used the 1998 version of the Reading Tutor outgained their matched classmates in Passage Comprehension (p = .11, effect size .60), but not in Word Attack, Word Identification, or fluency. To shed light on outcome differences between tutoring conditions and between individual human tutors, we compared process variables. Analysis of logs from all 6,080 human and computer tutoring sessions showed that human tutors included less rereading and more frequent writing than the Reading Tutor. Micro-analysis of 40 videotaped sessions showed that students who used the Reading Tutor spent considerable time waiting for it to respond, requested help more frequently, and picked easier stories when it was their turn. Human tutors corrected more errors, focused more on individual letters, and provided assistance more interactively, for example getting students to sound out words rather than sounding out words for students as the Reading Tutor did.
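
The abstract summarizes between-group differences as effect sizes on pre- to post-test gains (e.g., .55 for Word Attack). As a rough illustration of what such a figure represents, the sketch below computes a standardized mean difference (Cohen's d) with a pooled standard deviation on invented gain scores; the function, the data, and the exact formula are assumptions for illustration only, not the study's actual analysis or data.

```python
# Hypothetical sketch: computing a between-group effect size (Cohen's d)
# on pre- to post-test gain scores. The gain scores below are invented for
# illustration; they are NOT the study's data, and the paper does not
# specify this exact formula.
from statistics import mean, stdev

def cohens_d(gains_a, gains_b):
    """Standardized mean difference between two groups' gain scores."""
    n_a, n_b = len(gains_a), len(gains_b)
    # Pooled standard deviation across the two groups.
    pooled_sd = (((n_a - 1) * stdev(gains_a) ** 2 +
                  (n_b - 1) * stdev(gains_b) ** 2) /
                 (n_a + n_b - 2)) ** 0.5
    return (mean(gains_a) - mean(gains_b)) / pooled_sd

# Invented example: Word Attack gains (post minus pre) for a
# human-tutored group versus a Reading Tutor group.
human_tutored = [12, 15, 9, 14, 11, 13, 10, 16]
reading_tutor = [8, 10, 7, 11, 9, 6, 10, 8]

print(f"Effect size (Cohen's d): {cohens_d(human_tutored, reading_tutor):.2f}")
```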
