Abstract

The rise in the affordability of quality video production equipment has resulted in increased interest in video-mediated tests of foreign language listening comprehension. Although research on such tests has continued fairly steadily since the early 1980s, studies have relied on analyses of raw scores, despite the growing prevalence of item response theory in the field of language testing as a whole. The present study addresses this gap by comparing data from identical, counter-balanced multiple-choice listening test forms employing three text types (monologue, conversation, and lecture) administered to 164 university students of English in Japan. Data were analyzed via many-facet Rasch modeling to compare the difficulties of the audio and video formats; to investigate interactions between format and text type, and between format and proficiency level; and to identify specific items biased toward one or the other format. Finally, items displaying such differences were subjected to differential distractor functioning analyses. No interactions between format and text type, or between format and proficiency level, were observed. Four items displaying format-based differences in difficulty were identified, two of which corresponded to possible acting anomalies in the videos. The author argues for further work focusing on item-level interactions with test format.
