Abstract

The purpose of this project is to explore the feasibility of a new approach to producing evidence-based distractor sets. We use Common Wrong Answers (CWAs), and the associated performance data, generated by candidate responses to open gap-fill tasks to produce distractor sets for multiple-choice gap-fill tasks based on the same texts. We then investigate whether these distractor sets are effective for use in language tests, through empirical analysis and qualitative review, and consider the potential impact on the production process for test material. The project explores a novel method of content development and raises the possibility of an item-production approach that can semi-automatically generate test items in less time without affecting quality or reliability. Although the approach is specific to one task type, it is hoped that further research will expand its applications and deliver a version that can be operationalised across different task types in the development of language assessments.

Highlights

  • The process of producing tasks for English language tests is managed using principles of validity and reliability (Cambridge Assessment, 2017)

  • The first part reports on the performance of the distractors, which relates directly to RQ2 and to the evaluation of the effectiveness of distractors

  • The second part reports on the overall performance of the items and tasks as a whole, which must be taken into account when making determinations about the effectiveness of their distractor sets

Summary

Introduction

The process of producing tasks for English language tests is managed using principles of validity and reliability (Cambridge Assessment, 2017). Tasks are written by language experts to meet a pre-defined list of quality criteria and task specifications (validity), and performance data is collected and statistics reviewed by a panel of experts before a task is designated as fit for purpose (reliability). Two text-based tasks produced in this way, used in many language tests, are the focus of this study: the open gap-fill and the multiple-choice (MC) gap-fill. We outline a proposal for shortening the production process for one of these tasks, based on evidence drawn from the performance data of the other.
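
To make the mechanics concrete, the sketch below is a minimal illustration of how candidate distractors might be derived from open gap-fill response data: wrong answers for a single gap are tallied and the most frequent CWAs are proposed as distractors for the MC version of the item. This is our own assumption about how such a step could be implemented, not the authors' actual pipeline; the function name, threshold and example data are hypothetical.

    from collections import Counter

    def propose_distractors(responses, key, n_distractors=3, min_frequency=5):
        """Tally Common Wrong Answers (CWAs) for one open gap-fill item and
        return the most frequent ones as candidate distractors.

        responses     -- iterable of raw candidate answers for the gap
        key           -- set of acceptable answers for the gap
        n_distractors -- how many distractors the MC version of the item needs
        min_frequency -- ignore wrong answers given by fewer candidates than this
        """
        # Normalise surface form before counting, so "Despite" and "despite" merge.
        normalised = (r.strip().lower() for r in responses)
        # Count only non-empty responses that are not acceptable answers (the CWAs).
        wrong = Counter(r for r in normalised if r and r not in key)
        return [answer for answer, count in wrong.most_common(n_distractors)
                if count >= min_frequency]

    # Hypothetical example: responses to one gap whose key is "although".
    responses = ["although", "despite", "but", "despite", "however",
                 "although", "despite", "but", "however", "however"]
    print(propose_distractors(responses, key={"although"}, min_frequency=2))
    # -> ['despite', 'however', 'but']

The minimum-frequency threshold and the number of distractors shown are illustrative parameters only; in practice, any distractor set produced this way would still be subject to expert review and empirical trialling before use.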
