Abstract

Background: Assessment plays an essential role in the evaluation of learning, and multiple-choice questions (MCQs) are a common component of examinations. Item analysis makes it possible to identify good MCQs based on the difficulty index (DIF I), discrimination index (DI), and distractor effectiveness (DE), and the assessment of learning becomes more meaningful when item analysis is performed routinely. Aims and Objectives: Item analysis helps assess the validity and reliability of MCQs and build a valid MCQ question bank for future use. Materials and Methods: A set of 30 single best response type MCQs (items) was administered to 155 phase II MBBS students at a medical college in Mangalore. Each item was pre-validated and later analyzed for its DIF I, DI, and DE. Results: The mean DIF I was 55.91 ± 18.48%, the mean DI was 0.35 ± 0.16, and the mean distractor efficiency was 72.19 ± 29.15%. Of the 30 items, 22 had a “good to acceptable” DIF I and 18 had “good to excellent” discrimination power. Of the 120 distractors, 81% were functional. The correlation coefficient (r = 0.1968) indicated a positive correlation between DIF I and DI. Conclusion: The majority of the MCQs in this study fell within the acceptable range on all three indices analyzed, suggesting that valid and reliable MCQs can be developed only when item analysis is carried out regularly for all MCQ-based tests.
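
For readers unfamiliar with the indices, the sketch below shows how DIF I, DI, and DE are commonly computed in the item-analysis literature. It is an illustrative Python example, not the authors' actual procedure; in particular, the 27% high/low grouping and the 5% "functional distractor" threshold are assumptions based on common practice and are not stated in this abstract.

```python
import numpy as np

def item_analysis(item_correct, total_scores, group_frac=0.27):
    """Difficulty index (DIF I) and discrimination index (DI) for one item.

    item_correct : 0/1 flags (1 = examinee answered this item correctly)
    total_scores : each examinee's total test score
    group_frac   : fraction used for the high/low groups (27% is a common
                   convention; the study may have used a different split)
    """
    order = np.argsort(total_scores)[::-1]            # best scorers first
    item_correct = np.asarray(item_correct)[order]
    k = max(1, int(round(group_frac * len(item_correct))))
    high = item_correct[:k].sum()                     # correct in high group
    low = item_correct[-k:].sum()                     # correct in low group
    dif_i = 100.0 * (high + low) / (2 * k)            # DIF I as a percentage
    di = (high - low) / k                             # DI, ranges -1 to +1
    return dif_i, di

def distractor_effectiveness(option_counts, key, threshold=0.05):
    """DE for one item: a distractor is 'functional' if chosen by at least
    5% of examinees; DE is the percentage of distractors that are functional."""
    total = sum(option_counts.values())
    distractors = {opt: c for opt, c in option_counts.items() if opt != key}
    functional = sum(1 for c in distractors.values() if c / total >= threshold)
    return 100.0 * functional / len(distractors)

# Hypothetical item answered by 155 examinees, correct key 'B':
# counts = {"A": 12, "B": 90, "C": 30, "D": 15, "E": 8}
# distractor_effectiveness(counts, key="B")  # -> 100.0 (all four functional)
```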
