Abstract

This paper aims to assess the quality of the items of a summative test in order to improve its ability to measure students' knowledge acquisition. The test was used in the English subject for 11th-grade students and was administered at a secondary school in a western district of Saudi Arabia. The test consisted of 22 multiple-choice questions and was used to collect data from 94 randomly selected students. The Kuder-Richardson Formula 20 (KR-20) was applied to the test items to determine internal consistency reliability, which reached a good value of α = 0.70. Difficulty and discrimination indices were also used to evaluate the quality of the test, and the relationship between the two indices was measured. The difficulty index analysis showed that 50% of the items fall in the average range, while the remaining items were distributed among the too difficult, moderately difficult, and too easy levels. The discrimination index analysis showed that 45.0% of the items fall in the good range, while the other items were distributed among the poor, acceptable, and excellent levels. The Pearson correlation coefficient (r) estimating the relationship between the difficulty index and the discrimination index was -0.936, indicating a statistically significant relationship at the level (α ≤ 0.05) between the difficulty and discrimination indices of the multiple-choice summative test. To enhance the quality of this test and better assess students' knowledge acquisition, this study recommends that items at the too difficult and too easy levels of the difficulty index, as well as items with a poor discrimination index, be reviewed and modified by English language experts. Moreover, re-evaluation of the content validity by an English teacher could further improve its quality.
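
As a brief aid to interpretation, the indices reported above can be stated in their standard classical test theory form; the following is a minimal sketch of the usual definitions (the grouping convention for the discrimination index, taken here as upper and lower scoring groups of examinees, is an assumption, since the abstract does not reproduce the paper's exact procedure):

\[
KR\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right),
\qquad
P = \frac{R}{N},
\qquad
D = \frac{U - L}{n_g},
\]

where k is the number of items (22 in this test), p_i is the proportion of examinees answering item i correctly, q_i = 1 - p_i, σ_X² is the variance of total test scores, R is the number of correct responses to a given item out of N examinees (94 here), and U and L are the counts of correct responses to that item in the upper and lower scoring groups of size n_g.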
