Abstract

Language assessment is increasingly computer-mediated. This development presents opportunities for new task formats and, equally, a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research demonstrating enhanced construct coverage over conventional selected-response formats such as multiple choice. However, the comparison between response formats is underexplored in computer-mediated assessment and does not consider the test item presentation methods that this technology affords. To this end, the present study investigates performance in a computer-mediated lecture comprehension task by examining test-taker accounts of task completion involving multiple-choice questions without question preview and integrated response formats. Findings demonstrate overlap between the formats in several core processes but also point to important differences regarding the prioritization of aspects of the lecture, memory, and test anxiety. In many respects, participant comments indicate that the multiple-choice format measured a more comprehensive construct than the integrated format. The research will be relevant to those with an interest in computer-mediated assessment, and specifically to those responsible for developing and validating lecture comprehension assessments.


