Abstract

This study compares four item selection criteria for two-category computerized classification testing: (1) Fisher information (FI), (2) Kullback-Leibler information (KLI), (3) weighted log-odds ratio (WLOR), and (4) mutual information (MI), with respect to the efficiency and accuracy of classification decisions made with the sequential probability ratio test (SPRT), as well as the extent of item usage. The comparability of the four item selection criteria is examined primarily under three types of item selection conditions: (1) the four item selection algorithms alone, (2) the four algorithms with content balancing control, and (3) the four algorithms with both content balancing control and item exposure control. It is also evaluated under two types of proficiency distributions and three levels of indifference region width. The results show that the differences among the four item selection criteria wash out as more realistic constraints are imposed. Moreover, within two-category classification testing, MI does not necessarily yield greater efficiency than FI, WLOR, or KLI, although MI might seem attractive for the generality of its item selection formula.
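The abstract names the four criteria and the SPRT stopping rule but does not reproduce their formulas. The sketch below is a minimal Python illustration, assuming a 2PL IRT model, a cut score theta0 with indifference region [theta0 - delta, theta0 + delta], a two-point simplification of the MI criterion, and the unweighted log-odds ratio as the core of WLOR; all function names and these simplifications are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under a 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """FI of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def kl_info(t1, t2, a, b):
    """Item-level KLI between the two SPRT decision points t1 and t2."""
    p1, p2 = p_2pl(t1, a, b), p_2pl(t2, a, b)
    return p2 * np.log(p2 / p1) + (1 - p2) * np.log((1 - p2) / (1 - p1))

def log_odds_ratio(t1, t2, a, b):
    """Absolute log-odds ratio between t1 and t2 (the core of WLOR;
    the paper's criterion additionally applies a weight, omitted here)."""
    p1, p2 = p_2pl(t1, a, b), p_2pl(t2, a, b)
    return abs(np.log(p2 / (1 - p2)) - np.log(p1 / (1 - p1)))

def mutual_info(t1, t2, a, b, w=(0.5, 0.5)):
    """MI between the item response X and a two-point latent class
    {t1, t2} with class weights w (a simplification of the
    posterior-weighted criterion)."""
    p = np.array([p_2pl(t1, a, b), p_2pl(t2, a, b)])  # P(X=1 | class)
    w = np.asarray(w, dtype=float)
    marg = np.array([w @ (1.0 - p), w @ p])           # P(X=0), P(X=1)
    mi = 0.0
    for c in range(2):
        for x in (0, 1):
            joint = w[c] * (p[c] if x == 1 else 1.0 - p[c])  # P(X=x, C=c)
            mi += joint * np.log(joint / (w[c] * marg[x]))
    return mi

def sprt(responses, items, t1, t2, alpha=0.05, beta=0.05):
    """SPRT for H1: theta <= t1 vs H2: theta >= t2 from dichotomous
    responses to 2PL items given as (a, b) pairs."""
    llr = 0.0
    for x, (a, b) in zip(responses, items):
        p1, p2 = p_2pl(t1, a, b), p_2pl(t2, a, b)
        llr += x * np.log(p2 / p1) + (1 - x) * np.log((1 - p2) / (1 - p1))
    if llr >= np.log((1 - beta) / alpha):
        return "classify above cut score"
    if llr <= np.log(beta / (1 - alpha)):
        return "classify below cut score"
    return "continue testing"
```

A hypothetical run: with cut score theta0 = 0.0 and half-width delta = 0.25, the decision points are t1, t2 = -0.25, 0.25; after each response, `sprt` is applied, and while it returns "continue testing", the next item is chosen from the remaining pool by the criterion under study, e.g. `max(pool, key=lambda ab: fisher_info(theta0, *ab))` for FI or `max(pool, key=lambda ab: mutual_info(t1, t2, *ab))` for MI. The study's content balancing and exposure control conditions would add further constraints to this selection step.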
