Abstract

Item response theory (IRT) is a modern, model-based measurement theory whose popularity has grown with its many important research applications. Exploratory and confirmatory factor analyses are conducted to examine the interrelationships among the observed item responses and to test whether the data fit a hypothesized measurement model. Applying item response models to two undergraduate statistics exams shows how the trait-level estimates depend on both the examinees' responses and the properties of the administered items. The reliability analysis indicates that both exams measure a single unidimensional latent construct, examinee ability, very well. Exploratory factor analysis of the second exam yields a two-factor model, and confirmatory factor analysis, based on several goodness-of-fit indices, verifies this result. We fit the testlet-based data with dichotomous and polytomous item response models and compare the estimated item parameters and total information functions across the three models. Difficulty parameters estimated from the one- and two-parameter logistic item response functions correlate highly. The first statistics exam is a good measurement instrument, since examinees at all ability levels are assessed by questions of varying difficulty along the whole scale. Several more difficult questions are needed to measure high-proficiency examinees on the second exam. The polytomous IRT model provides more information than the two-parameter logistic model only at high ability levels.
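As an illustration of the two-parameter logistic (2PL) model referred to above, the item response function and the corresponding item information can be sketched as follows. This is a minimal sketch, not code from the study; the parameter values are hypothetical.

```python
import math

def two_pl_probability(theta, a, b):
    """Probability of a correct response under the 2PL model:
    P(theta) = 1 / (1 + exp(-a * (theta - b))),
    where theta is the examinee's ability, a the item discrimination,
    and b the item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by a 2PL item: a^2 * P * (1 - P).
    Information peaks where theta equals the item difficulty b, which is
    why difficult items are needed to measure high-proficiency examinees."""
    p = two_pl_probability(theta, a, b)
    return a * a * p * (1.0 - p)

# An examinee whose ability equals the item's difficulty has a 0.5
# probability of answering correctly, and the item is maximally informative there.
print(two_pl_probability(theta=0.0, a=1.2, b=0.0))   # 0.5
print(item_information(theta=0.0, a=1.2, b=0.0))     # 0.36
```

Summing `item_information` over all items of a test at a given ability level gives the total information function that the abstract compares across the fitted models.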
