Abstract

This article provides renewed converging empirical evidence for the hypothesis that asking test-takers to respond to text passages with multiple-choice questions induces response processes that are strikingly different from those respondents would draw on when reading in non-testing contexts. Moreover, the article shows that the construct of reading comprehension is assessment-specific and is fundamentally determined by item design and text selection. The data come from qualitative analyses of 10 cognitive interviews conducted with non-native adult readers of English who were given three passages, each with several multiple-choice questions, from the CanTEST, a large-scale language test used for admission and placement purposes in Canada, in a partially counter-balanced design. The analyses show that:

• multiple different representations of the construct of 'reading comprehension' exist and are revealed through the characteristics of the items;
• learners view responding to multiple-choice questions as a problem-solving task rather than a comprehension task;
• learners select a variety of unconditional and conditional response strategies when deliberately choosing a response; and
• learners combine a variety of mental resources interactively when determining an appropriate choice.

These findings support the development of response-process models that are specific to different item types, the design of further experimental studies of test-method effects on response processes, and the development of questionnaires that profile response processes and strategies specific to different item types.
