Abstract

Despite the crucial importance of the notion of parallel forms within Classical Test Theory, the degree of parallelism between two forms of a test cannot be directly verified due to the unobservable nature of true scores. We intend to overcome some of the limitations of traditional approaches to analyzing parallelism by using the Differential Item Functioning (DIF) framework. We shift the focus of comparison from total test scores to each of the items developed during test construction. We analyze the performance of a single group of individuals on parallel items designed to measure the same behavioral criterion using several DIF techniques. The proposed approach is illustrated with a dataset of 527 participants who responded to the two parallel forms of the Attention Deficit-Hyperactivity Disorder Scale (Caterino, Gómez-Benito, Balluerka, Amador-Campos, & Stock, 2009). Twelve of the 18 items (66.7%) show probability values associated with the Mantel χ² statistic of less than .01. The standardization procedure shows that half of the DIF items favoured Form A and the other half Form B. The "differential functioning of behavioral indicators" (DFBI) can provide unique information on parallelism between pairs of items, complementing the traditional analysis of equivalence between parallel test forms based on total scores.
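The Mantel χ² statistic reported above can be sketched as a stratified 2×2 analysis. The following is a simplified illustration, not the authors' actual implementation: it treats the Form A and Form B responses to a pair of parallel items as two stratified groups matched on a total-score variable (ignoring the within-person pairing that the DFBI design involves), and all function and variable names are invented here.

```python
import numpy as np
from scipy.stats import chi2


def mantel_haenszel_dif(resp_a, resp_b, strata):
    """Continuity-corrected Mantel-Haenszel chi-square for DIF between
    two parallel dichotomous items (hypothetical helper, for illustration).

    resp_a, resp_b : 0/1 arrays of item scores on the Form A and Form B
                     versions of an item.
    strata         : matching variable (e.g., total test score) defining
                     strata of comparable respondents.
    """
    num, var = 0.0, 0.0
    for s in np.unique(strata):
        m = strata == s
        a = resp_a[m].sum()           # Form A, correct
        b = (1 - resp_a[m]).sum()     # Form A, incorrect
        c = resp_b[m].sum()           # Form B, correct
        d = (1 - resp_b[m]).sum()     # Form B, incorrect
        n1, n0 = a + b, c + d         # row totals (responses per form)
        m1, m0 = a + c, b + d         # column totals (correct / incorrect)
        t = n1 + n0
        if t < 2 or m1 == 0 or m0 == 0:
            continue                  # stratum carries no information
        num += a - n1 * m1 / t        # observed minus expected, A-correct cell
        var += n1 * n0 * m1 * m0 / (t**2 * (t - 1))
    chi_sq = (abs(num) - 0.5) ** 2 / var
    p_value = chi2.sf(chi_sq, df=1)
    return chi_sq, p_value
```

A p-value below .01 for a given item pair would flag it as showing DIF between the two forms, as reported for 12 of the 18 items in the study.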

