Abstract

Researchers have recommended involving domain experts in the design of scoring rubrics for language for specific purposes (LSP) tests by eliciting profession-relevant, indigenous criteria and applying these to test performances (see, e.g., Douglas, 2001; Jacoby, 1998; Pill, 2016). However, these indigenous criteria, derived as they are from people outside the assessment field, may be difficult to apply for the non-domain-expert raters typically employed to rate performances on language tests. This paper addresses this issue with reference to the writing component of the Occupational English Test (OET), a test designed to assess the English communication skills of overseas-trained health professionals. The paper describes the development of a set of professionally relevant writing descriptors and then explores how well language-trained raters (N = 15) were able to apply these to a set of OET writing samples. All raters were interviewed, and the rating data were analysed statistically. The findings show that while the statistical properties of the score data were generally satisfactory, some of the raters felt unable to apply the scale confidently because of their perceived lack of medical knowledge. The study has implications for scale design, rater training and the use of professionally relevant rating scales in LSP testing.
