Abstract

Integrated speaking tasks requiring test takers to read and/or listen to stimulus texts and to incorporate their content into oral performances are now used in large-scale, high-stakes tests, including the TOEFL iBT. These tasks require test takers to identify, select, and combine relevant source text information, to recognize key relationships between source text ideas, and to organize and transform information. Despite being central to evaluations of validity, relationships between stimulus content, task demands, and the oral discourse produced by test takers have yet to be adequately scrutinized empirically. In this study, we focus on a TOEFL iBT reading–listening–speaking task, applying discourse analytic measures developed by Frost, Elder and Wigglesworth (2012) to 120 oral performances to examine (a) the integration of source text ideas by test takers across three proficiency levels, and (b) the appropriateness of content-related criteria in the TOEFL integrated speaking rubric. We then combine analyses of these aspects of performances with a qualitative analysis of the generic structure and semantic profiles of stimulus texts to explore relationships between stimulus text properties and oral performances. Findings suggest that the extent to which content-related rating scale criteria distinguish between proficiency levels is contingent on stimulus text properties, with important implications for construct definitions and task design.
