Abstract
Simulated data, validity reports, and a firefighter predictive validation study are used to examine validity bias created by three common selection problems: range restriction, applicant and incumbent attrition, and nonlinearity created by compression of high selection test scores. Top 20% selection samples drawn from an applicant pool with known validity coefficients demonstrate that the sample validity estimates of the three predictors are differentially biased in both magnitude and direction, depending on the selection strategy used. Concurrent validity designs generally favor novel predictors. Corrections for direct range restriction were mostly ineffectual across situations. With proper scaling, corrections for indirect range restriction are accurate, but cross-variable biasing effects can occur when the score distributions of the individual predictors differ. Many of the biases found in the simulation results are demonstrated in a firefighter predictive validation study, where variations of Pearson-Thorndike range-corrected validities and a full information maximum likelihood (FIML) approach are compared as validity assessments. With normalized predictors, both the Pearson and FIML methods show that a test of general mental ability and physically demanding job tasks predicted firefighter performance throughout the 30-year study, with no evidence of interactions or a leveling of performance at high test scores.
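The following is a minimal sketch, not the authors' code, of the kind of simulation and correction the abstract describes: a bivariate-normal applicant pool with a known predictor-criterion correlation, top-20% selection on the predictor, and the classical Thorndike Case II correction for direct range restriction. The function names and parameter values (population validity of .50, pool size, selection ratio) are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): simulate top-20% selection from
# a bivariate-normal applicant pool with known validity, then apply the
# Thorndike Case II correction for direct range restriction.
import numpy as np

rng = np.random.default_rng(0)

def simulate_selection(true_r=0.50, n_applicants=100_000, top_fraction=0.20):
    """Draw an applicant pool, select the top fraction on the predictor,
    and return the restricted correlation and the SD ratio (unrestricted/restricted)."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n_applicants).T
    cutoff = np.quantile(x, 1.0 - top_fraction)
    selected = x >= cutoff
    r_restricted = np.corrcoef(x[selected], y[selected])[0, 1]
    sd_ratio = x.std() / x[selected].std()
    return r_restricted, sd_ratio

def thorndike_case2(r_restricted, sd_ratio):
    """Correct a validity coefficient for direct range restriction (Thorndike Case II)."""
    num = r_restricted * sd_ratio
    den = np.sqrt(1.0 - r_restricted**2 + (r_restricted * sd_ratio)**2)
    return num / den

r_obs, u = simulate_selection()
print(f"restricted r = {r_obs:.3f}")
print(f"corrected r  = {thorndike_case2(r_obs, u):.3f}  (population r = 0.50)")
```

Under this idealized direct-truncation setup the correction recovers the population validity reasonably well; the abstract's point is that in the more realistic conditions it studies (indirect selection, attrition, and compression of high scores), simple direct-range-restriction corrections can be largely ineffectual and biases can differ across predictors.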