Abstract

This article takes up some of the issues identified by Douglas (2000) as problematic for Language for Specific Purposes (LSP) testing, making reference to a number of performance-based instruments designed to assess the language proficiency of teachers or intending teachers. The instruments referred to include proficiency tests for teachers of Italian as a foreign language in Australia (Elder, 1994) and for trainee teachers using a foreign language (in this case English) as a medium for teaching school subjects such as mathematics and science in Australian secondary schools (Elder, 1993b; Viete, 1998). The first problem addressed in the article has to do with specificity: how does one define the domain of teacher proficiency, and is it distinguishable from other areas of professional competence or, indeed, from what is often referred to as ‘general’ language proficiency? The second problem has to do with the vexed issue of authenticity: what constitutes appropriate task design on a teacher-specific instrument, and to what extent can ‘teacher-like’ language be elicited from candidates in the very artificial environment of a test? The third issue pertains to the role of nonlanguage factors (such as strategic competence or teaching skills) which may affect a candidate’s response to any appropriately contextualized test task, and whether these factors can or should be assessed independently of the purely linguistic qualities of the test performance. All of these problems are about blurred boundaries: between and within real-world domains of language use, between the test and the nontest situation, and between the components of ability or knowledge measured by the test. It is argued that these blurred boundaries are an indication of the indeterminacy of LSP, as currently conceptualized, as an approach to test development.
