Abstract

In developing spoken language assessment procedures for occupational settings, it is common practice to use occupational experts as informants. The rating process, however, more commonly relies exclusively upon the judgements of language-trained specialists. Research to date has produced conflicting findings concerning the relative harshness and other characteristics of language-trained raters versus ‘naïve’ native speaker or occupational expert raters. This issue is considered in the context of a recent standard-setting project carried out for the Occupational English Test [McNamara, T. F. (1990). Assessing the second language proficiency of health professionals. Unpublished doctoral dissertation, University of Melbourne; McNamara, T. F. (1996). Measuring second language performance. London: Longman; Lumley et al. (1994). A new approach to standard-setting in language assessment. Melbourne Papers in Language Testing, 3(2), 19–39], an occupation-specific test of English for overseas-trained health professionals administered on behalf of the Australian Government. The study was conducted in response to criticism of the standards applied in the test. Twenty audio recordings of role plays from recent administrations of the speaking sub-test were each rated by ten trained ESL raters and ten medical practitioners. The ratings were analysed to compare the extent of agreement between the two groups of judges concerning candidates’ language proficiency, as well as group and individual differences in interpretations of the rating scale used. The broad similarity of judgements between the two groups indicates that the practice of relying on ESL-trained raters is essentially justified.
