Abstract

Background

The item analysis of multiple-choice questions (MCQs) is an essential tool that provides input on the validity and reliability of test items. It helps to identify items that should be revised or discarded, thus building a quality MCQ bank.

Methods

The study focussed on item analysis of 90 MCQs from three tests conducted for 150 first-year Bachelor of Medicine and Bachelor of Surgery (MBBS) physiology students. The item analysis examined the difficulty index (DIF I) and the discrimination index (DI), together with distractor effectiveness (DE). Statistical analysis was performed using MS Excel 2010 and SPSS, version 20.0.

Results

Of the 90 MCQs, the majority, 74 (82%), had a good/acceptable level of difficulty, with a mean DIF I of 55.32 ± 7.4 (mean ± SD), whereas seven (8%) were too difficult and nine (10%) were too easy. A total of 72 (80%) items had an excellent to acceptable DI and 18 (20%) had a poor DI, with an overall mean DI of 0.31 ± 0.12. There was a significant but weak correlation between DIF I and DI (r = 0.140, p < .0001). The mean DE was 32.35 ± 31.3, with 73% of distractors functional overall. The reliability of the test items was good: Cronbach's alpha was 0.85 and the Kuder-Richardson Formula 20 (KR-20) value was 0.71. The standard error of measurement was 1.22.

Conclusion

Our study helped teachers identify good and ideal MCQs that can become part of a question bank for future use, as well as MCQs that needed revision. We recommend that item analysis be performed for all MCQ-based assessments to determine their validity and reliability.
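For readers unfamiliar with these indices, the following is a minimal sketch of how they are conventionally computed from a students × items matrix of 0/1 scores. It is not the authors' actual pipeline (the study used MS Excel 2010 and SPSS 20.0); the function name, the 27% upper/lower group split, and the example data are illustrative assumptions. Distractor effectiveness is omitted because it requires option-level responses (a distractor chosen by at least 5% of examinees is usually counted as functional).

```python
import numpy as np

def item_analysis(scores: np.ndarray, group_frac: float = 0.27):
    """scores: (n_students, n_items) array of 0/1 marks per item.

    Returns per-item difficulty (DIF I, %) and discrimination (DI)
    indices, plus test-level KR-20 reliability and the standard
    error of measurement (SEM).
    """
    n_students, n_items = scores.shape
    totals = scores.sum(axis=1)                 # each student's total score
    order = np.argsort(totals)
    n_group = max(1, int(round(group_frac * n_students)))
    low = scores[order[:n_group]]               # bottom 27% of scorers
    high = scores[order[-n_group:]]             # top 27% of scorers

    h = high.sum(axis=0)                        # correct answers in high group
    l = low.sum(axis=0)                         # correct answers in low group
    dif = (h + l) / (2 * n_group) * 100         # DIF I: % correct in both groups
    di = (h - l) / n_group                      # DI: high-low gap, range -1 to 1

    # Kuder-Richardson Formula 20 for dichotomously scored items
    p = scores.mean(axis=0)                     # proportion correct per item
    q = 1 - p
    kr20 = (n_items / (n_items - 1)) * (1 - (p * q).sum() / totals.var(ddof=1))

    # Standard error of measurement from total-score SD and reliability
    sem = totals.std(ddof=1) * np.sqrt(1 - kr20)
    return dif, di, kr20, sem

# Illustrative usage with simulated responses (150 students, 30 items):
dif, di, kr20, sem = item_analysis(np.random.binomial(1, 0.6, size=(150, 30)))
```

Under the usual cut-offs, an item with DIF I between roughly 30% and 70% and DI of 0.2 or above would be kept, which is the kind of screening the abstract describes.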
