Abstract

The objective of this study is to analyse the quality of multiple-choice questions (MCQs) using three evaluative tools: the difficulty index (DIF), the discrimination index (DI), and distractor efficiency (DE). The analysis is based on the results of quizzes conducted in Trimester III of 2020-2021 for the Professional English course at Astana IT University. The DIF, DI, and DE of midterm quiz results of first-year students with different language levels were analysed using Microsoft Excel and Moodle LMS. The study examines MCQs in terms of their effectiveness and offers recommendations for improvement. The three main research questions are: What is the difficulty index of the midterm multiple-choice questions? What is the discrimination index of the midterm MCQ quizzes? What is the distractor efficiency of the midterm MCQ distractors? The study showed that the difficulty index of the MCQs was below average, as only 43% (n=13) of the questions reached an acceptable level; the remaining 57% (n=17), which fall under the category of “too easy”, should be revised. The discrimination index evaluation revealed 100% efficiency. Distractor efficiency equalled 88%, a more than satisfactory level. By applying the DIF, DI, and DE tools on a regular basis, MCQ quizzes and distractors can be improved, leading to the annual accumulation of a higher-quality pool of questions for more effective student assessment.
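The abstract does not state how the three indices were computed; the following is a minimal sketch of the standard item-analysis formulas commonly used for each. The conventions assumed here (the upper/lower score-group comparison for DI and the 5% cut-off for a "functional" distractor) are widespread defaults, not details taken from the study itself.

```python
# Standard item-analysis formulas (illustrative; thresholds such as the
# 5% distractor cut-off are common conventions, not from the study).

def difficulty_index(correct: int, total: int) -> float:
    """DIF: percentage of examinees who answered the item correctly."""
    return 100.0 * correct / total

def discrimination_index(upper_correct: int, lower_correct: int,
                         group_size: int) -> float:
    """DI: (H - L) / n, where H and L count correct answers in the
    upper and lower score groups (each of size n)."""
    return (upper_correct - lower_correct) / group_size

def distractor_efficiency(distractor_picks: list[int], total: int,
                          cutoff: float = 0.05) -> float:
    """DE: percentage of distractors that are 'functional', i.e. chosen
    by at least `cutoff` of examinees."""
    functional = sum(1 for picks in distractor_picks
                     if picks / total >= cutoff)
    return 100.0 * functional / len(distractor_picks)

# Hypothetical item answered by 60 students:
print(difficulty_index(48, 60))            # 48 of 60 correct
print(discrimination_index(15, 6, 16))     # top vs bottom groups of 16
print(distractor_efficiency([5, 4, 1], 60))  # picks per distractor
```

A DIF near 80 would place such an item in the "easy" band under most rubrics, which is the pattern the study reports for the majority of its questions.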
