Abstract

This article reports on an investigation of the role raters’ language background plays in their assessment of test takers’ speaking ability. Specifically, it examines differences between American and Indian raters in their scores and scoring processes when rating Indian test takers’ responses to Test of English as a Foreign Language™ Internet-Based Test (TOEFL iBT®) Speaking tasks. Three American and three Indian raters were asked to score 60 speech samples from 10 Indian test takers’ responses to TOEFL iBT Speaking tasks and to perform think-aloud protocols while scoring. The data were analyzed with multifaceted Rasch and verbal protocol analyses. Findings indicate that Indian raters were better than American raters at identifying and understanding features of Indian English in the test takers’ responses. However, Indian and American raters did not differ in their use of scoring criteria, their attitudes toward Indian English, or the internal consistency and severity of their scores.
