Abstract

Among the variety of selected-response formats used in L2 reading assessment, multiple choice (MC) is the most commonly adopted, primarily because of its efficiency and objectivity. Given the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which the MC format reliably measures learners’ L2 reading comprehension in the classroom context. While researchers have claimed that the longer the reading test (i.e., the more test items and passages it contains), the higher its overall reliability, few studies have investigated the optimal number of items and passages required for reliable classroom-based L2 reading assessment. To address this gap, I adopted generalizability (G) theory to investigate the score reliability of the MC format in classroom-based L2 reading tests. A total of 108 ESL students at an American college completed an English reading test consisting of four passages, each accompanied by five MC comprehension questions. The results showed that the score reliability of the L2 reading test was critically influenced by the number of items and passages: different combinations of passages and items yielded different degrees of reliability. Implications for practitioners and educational researchers are discussed.
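
For readers unfamiliar with G theory, the claim about test length can be made concrete with the generalizability (D-study) coefficient for a person × (items : passages) design. This is a standard formulation from the G-theory literature, offered here as an illustrative sketch; it is not necessarily the exact variance-component model estimated in the study.

% Relative G coefficient for a p x (I:P) random design (standard G theory; illustrative).
% sigma^2_p      : universe-score (person) variance
% sigma^2_{pP}   : person-by-passage interaction variance
% sigma^2_{pI:P} : person-by-item-within-passage variance (confounded with residual)
% n_P, n_I       : numbers of passages and items per passage chosen in the D study
\[
  E\rho^{2} \;=\;
  \frac{\sigma^{2}_{p}}
       {\sigma^{2}_{p}
        + \dfrac{\sigma^{2}_{pP}}{n_{P}}
        + \dfrac{\sigma^{2}_{pI:P}}{n_{P}\,n_{I}}}
\]

Because n_P divides both error terms while n_I divides only the last, adding passages and adding items per passage reduce measurement error at different rates, which is why different combinations of the two can yield different reliabilities even for the same total number of items.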
