Abstract

Language assessment is increasingly computer-mediated. This development presents opportunities in the form of new task formats, as well as a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research demonstrating enhanced construct coverage over conventional selected-response formats such as multiple choice. However, the comparison between response formats is underexplored in computer-mediated assessment and does not consider the test item presentation methods that this technology affords. To this end, the present study investigates performance on a computer-mediated lecture comprehension task by examining test-taker accounts of task completion involving multiple-choice questions without question preview and integrated response formats. Findings demonstrate overlap between the formats in terms of several core processes but also point to important differences regarding the prioritization of aspects of the lecture, memory, and test anxiety. In many respects, participant comments indicate that the multiple-choice format measured a more comprehensive construct than the integrated format. The research will be relevant to those with an interest in computer-mediated assessment, and specifically to those responsible for developing and validating lecture comprehension assessments.

