Abstract

The overarching goal of this dissertation is to increase the precision and efficiency of the measurement tools used to make selection decisions in industrial/organizational psychology by introducing psychometric innovations within the framework of computerized adaptive testing (CAT). Chapter 1 presents a general introduction to CAT and item response theory (IRT). Chapter 2 illustrates an automatic online calibration design for adaptive testing; the method makes participation during calibration more attractive for respondents and increases the speed with which a CAT item bank can be calibrated. Chapter 3 demonstrates a straightforward method for testing measurement invariance and illustrates a method for modeling differential item functioning by assigning group-specific item parameters within the IRT framework. Chapter 4 illustrates a method for verifying the results of an unproctored Internet test using an extension of the stochastic curtailed truncated sequential probability ratio test (SCTSPRT). Simulation studies indicated that tests based on the SCTSPRT were almost four times shorter than a linear test while maintaining the same power of detection. Chapters 5 and 6 investigate the possibility of increasing the precision and shortening the length of typical employment tests by efficiently administering and scoring items with multidimensional computerized adaptive testing (MCAT). Chapter 5 explores the use of MCAT for administering and scoring the Adjustable Competence Evaluation, a computerized adaptive cognitive ability test used in organizational selection. Chapter 6 explores the potential of MCAT for administering and scoring the NEO PI-R, a widely used personality test.
