Abstract

The current study evaluated the effectiveness of two-stage testing on English and French versions of a science achievement test administered to a national sample in Canada in 1996 and 1999. The tests were administered and scored with the implicit assumption that the two language forms were equivalent. Analysis of the first-stage test revealed that 3 out of 12 items displayed differential item functioning (DIF) in both administrations. However, substantive reviews suggested that translation errors were not the cause of DIF. Analysis of the second-stage test revealed that the test was not comparable between ability groups but was comparable for English and French examinees within each ability group in both administrations. This study illustrates how test developers can monitor their adaptation and administration process when alternative testing procedures are used with multiple language groups. The results are also relevant to cross-cultural researchers who compare examinees from different ethnic and cultural backgrounds.
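The abstract does not specify which DIF detection method the study applied, so as a hedged illustration only, the sketch below shows the Mantel-Haenszel procedure, one standard way to screen a dichotomously scored item for DIF across two language groups matched on ability strata. All data, names, and thresholds here are hypothetical.

```python
import math

def mantel_haenszel_dif(tables):
    """Mantel-Haenszel DIF screening for a single test item.

    tables maps each ability stratum to counts (a, b, c, d):
      a = reference-group correct,  b = reference-group incorrect,
      c = focal-group correct,      d = focal-group incorrect.
    Returns the common odds-ratio estimate alpha and the ETS delta
    statistic; |delta| values around 1.5 or more are often treated as
    moderate-to-large DIF (a rough rule of thumb, not the study's rule).
    """
    num = den = 0.0
    for a, b, c, d in tables.values():
        n = a + b + c + d
        if n:
            num += a * d / n
            den += b * c / n
    alpha = num / den
    delta = -2.35 * math.log(alpha)  # ETS delta scale
    return alpha, delta

# Hypothetical item with equal correct rates in both groups within each
# stratum: no DIF (alpha = 1, delta = 0).
no_dif = {1: (30, 20, 30, 20), 2: (45, 5, 45, 5)}
alpha, delta = mantel_haenszel_dif(no_dif)

# Hypothetical item favoring the reference group: alpha > 1, delta
# well below -1.5, which would flag the item for substantive review.
flagged = {1: (40, 10, 25, 25)}
alpha2, delta2 = mantel_haenszel_dif(flagged)
```

Matching examinees on first-stage ability before comparing groups, as the stratification above does, mirrors the abstract's within-ability-group comparisons of English and French examinees.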
