Abstract

Multidimensional Item Response Theory (MIRT) has been proposed as a means to model the relation between examinee abilities and test responses. Three recent articles proved that when MIRT is used in ability estimation, an examinee’s score could theoretically decrease due to a correct answer or increase due to an incorrect answer. The current article examines the extent to which such “paradoxical results” can arise in practice. In an operational test designed to measure two dimensions, a substantial percentage of paradoxical results occurred when using a MIRT model with a prior correlation of 0 between abilities. Assuming a positive correlation between abilities reduced the prevalence of paradoxical results but did not eliminate them entirely. Associated issues in test fairness are discussed.
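The phenomenon described above can be reproduced in a small numerical illustration. The sketch below is not taken from the article; it uses a hypothetical two-item, two-dimensional compensatory 2PL model with illustrative item parameters and MAP ability estimation under a bivariate normal prior. With these particular parameters, changing the response to the second item from incorrect to correct raises the estimate of the second ability but lowers the estimate of the first, which is the kind of paradoxical result the abstract refers to.

```python
# A minimal sketch (not from the article) of a "paradoxical result" in MIRT
# ability estimation: a correct answer lowers the MAP estimate on one dimension.
# Item parameters and responses are purely illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

# Discriminations a_j = (a_j1, a_j2) and intercepts d_j for two hypothetical items:
# item 1 loads on both dimensions, item 2 loads only on dimension 2.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
d = np.array([0.0, 0.0])

def map_estimate(responses, rho=0.0):
    """MAP estimate of (theta1, theta2) under a bivariate normal prior
    with unit variances and correlation rho between the abilities."""
    prior_cov = np.array([[1.0, rho], [rho, 1.0]])
    prior_prec = np.linalg.inv(prior_cov)

    def neg_log_posterior(theta):
        p = expit(A @ theta + d)                       # P(correct) for each item
        log_lik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
        log_prior = -0.5 * theta @ prior_prec @ theta  # up to an additive constant
        return -(log_lik + log_prior)

    return minimize(neg_log_posterior, x0=np.zeros(2), method="BFGS").x

# Same examinee, differing only in the response to item 2 (incorrect vs. correct),
# with a prior correlation of 0 between the abilities.
theta_wrong = map_estimate(np.array([1, 0]), rho=0.0)
theta_right = map_estimate(np.array([1, 1]), rho=0.0)
print("item 2 incorrect:", theta_wrong)  # theta1 is higher here...
print("item 2 correct:  ", theta_right)  # ...than here: the correct answer lowers theta1
```

Re-running `map_estimate` with `rho > 0` corresponds to the positively correlated prior mentioned in the abstract and can be used to explore how much such a prior shrinks, without necessarily eliminating, the effect in this toy setting.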

