Abstract

This study investigates the impact of raters’ language background on their judgements of speaking performance on China’s College English Test-Spoken English Test (CET-SET), by comparing the rating patterns of the non-native English-speaking (NNES) teacher raters currently employed to assess CET-SET performance with those of ‘ideal’, norm-owning native English-speaking (NES) teacher raters. Many-facet Rasch measurement and content analysis were applied to the scores and stimulated recall data collected from the two rater groups. The results indicate that, although NES and NNES raters approach rating somewhat differently, the outcomes of the rating process are broadly similar, as are the categories that inform their judgements. We discuss the implications of these results for the use of raters from different language backgrounds in scoring high-stakes speaking tests, for the debate on native-speaker norms in language testing in general, and for the validity of the CET-SET rating scale in particular.
