Abstract

Item responses on a multiple-choice reading comprehension test from 88 ESL subjects were submitted to classical test item analyses and to the Rasch one-parameter latent trait model. The Rasch model identified more misfitting items than the classical item analyses did and allowed the researchers to plot the reading comprehension items at their calibrated positions along the latent trait. The trait was defined at the easy end by items that assess literal comprehension and inference of explicit details, and at the difficult end by items that assess multiple-string inferences. The Rasch model also indicated that additional items were needed at both ends of the difficulty continuum if subjects in those ranges were to be discriminated properly.
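
The sketch below is not the authors' analysis; it is a minimal illustration, on simulated data, of the Rasch one-parameter model's response probability, a rough logit-based (PROX-style) calibration of item difficulties, and an outfit mean-square statistic of the kind used to flag misfitting items. The sample size (88) and item count (30), the clipping bounds, and the 1.3 misfit threshold are assumptions chosen for the example, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_prob(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b)) for all person-item pairs."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

# Simulated 88-person x 30-item dichotomous response matrix (hypothetical data).
true_theta = rng.normal(0.0, 1.0, size=88)
true_b = np.linspace(-2.0, 2.0, 30)
responses = (rng.random((88, 30)) < rasch_prob(true_theta, true_b)).astype(int)

# Rough item calibration: centered log-odds of item facility
# (a PROX-style first approximation, not a full joint/marginal ML estimate).
p_item = responses.mean(axis=0).clip(0.01, 0.99)
b_hat = np.log((1 - p_item) / p_item)
b_hat -= b_hat.mean()  # anchor the scale at mean item difficulty 0

# Person abilities from raw-score logits on the same rough scale.
p_person = responses.mean(axis=1).clip(0.01, 0.99)
theta_hat = np.log(p_person / (1 - p_person))

# Outfit mean-square: mean squared standardized residual per item;
# values well above ~1.3 are commonly read as possible misfit (rule of thumb).
P = rasch_prob(theta_hat, b_hat)
z = (responses - P) / np.sqrt(P * (1 - P))
outfit = (z ** 2).mean(axis=0)

# List items in calibrated difficulty order, flagging high-outfit items.
for i in np.argsort(b_hat):
    flag = " <- possible misfit" if outfit[i] > 1.3 else ""
    print(f"item {i:2d}  difficulty {b_hat[i]:+.2f}  outfit {outfit[i]:.2f}{flag}")
```

Printing the items in difficulty order mirrors the kind of item map the abstract describes: items cluster along a single continuum, and gaps at either end of that continuum signal where additional easy or difficult items would be needed to discriminate among examinees in those ranges.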
