Abstract

To what extent do special-purpose criteria for assessing spoken interaction recontextualize in the assessment setting what counts as communicative competence in professional settings? Current work in discourse analysis brings a critical perspective to bear on the primarily linguistic orientation of traditional assessment procedures. We examine this issue by discussing two contrasting projects. An Australian study of immigrant health professionals found a mismatch between successful performance on a test of occupation-specific English-language communicative ability and clinical supervisors' perceptions of the adequacy of their supervisees' English communication skills. To investigate this discrepancy, tape-recordings of performances from prior test administrations were re-rated by ESL raters and the native-speaking clinical supervisors involved. Expected differences between the two groups' ratings were not found; the test was apparently not getting at those aspects of communicative competence which concerned the supervisors. An American project, involving ethnographic and discourse-analytic methods, documented routine practices through which members of a university physics research group are socialized into field-specific discourse practices. It was found that native and non-native English-speaking members routinely prepared for upcoming conferences by simulating performances of their presentations, which were then evaluated by the group. Analysis revealed the wide range of details oriented to by the physicists as they discussed the performances. Locating comparable assessment activities in other professional settings may cast light on criteria in these settings relevant to the assessment of special-purpose spoken interaction. This information may be used to critique existing test criteria and may possibly also be used as the basis for establishing more relevant ones.
