Abstract
In personnel selection, variations in how the Hunter and Schmidt meta-analytic procedures are interpreted and applied often preclude meaningful comparison of validity coefficients estimated by different meta-analyses. To increase the comparability and accuracy of these coefficients, a standardized set of procedures and cumulated artifact distributions were used to correct 20 widely reported meta-analytic validity coefficients estimated for six personnel selection methods, with job performance as the criterion. Structured interviews and cognitive ability tests demonstrated the highest operational validity. Seventeen of the 20 coefficients were found to be affected by moderators according to the Hunter and Schmidt 75% rule. On average, around 50% of the variance in the meta-analytic coefficients was explained by the correctable statistical artifacts of sampling error, direct range restriction in the predictor, and criterion unreliability. Limitations of this exercise and its implications for future research and practice are discussed.
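For readers who want the mechanics behind the corrections the abstract refers to, the sketch below illustrates the standard Hunter and Schmidt artifact-correction formulas: disattenuation for criterion unreliability, the Thorndike Case II correction for direct range restriction in the predictor, the expected sampling-error variance, and the 75% rule. This is a minimal Python illustration with made-up example values; the study's own cumulated artifact distributions and exact computational sequence are not reproduced here.

    import math

    def correct_criterion_unreliability(r, r_yy):
        # Disattenuate the observed validity for unreliability in the
        # job-performance criterion (r_yy = criterion reliability).
        return r / math.sqrt(r_yy)

    def correct_direct_range_restriction(r, big_u):
        # Thorndike Case II correction for direct range restriction in
        # the predictor; big_u = SD(applicants) / SD(incumbents) >= 1.
        return (big_u * r) / math.sqrt((big_u**2 - 1) * r**2 + 1)

    def sampling_error_variance(mean_r, mean_n):
        # Expected variance in observed correlations attributable to
        # sampling error alone, given mean observed r and mean N.
        return (1 - mean_r**2) ** 2 / (mean_n - 1)

    def moderators_suspected(observed_var, artifact_var):
        # Hunter and Schmidt 75% rule: if artifacts explain less than
        # 75% of the observed variance, moderators are suspected.
        return artifact_var / observed_var < 0.75

    # Illustrative values only (not taken from the study):
    r_obs = 0.25  # mean observed validity
    r_c = correct_criterion_unreliability(r_obs, r_yy=0.52)
    rho = correct_direct_range_restriction(r_c, big_u=1.5)
    print(round(rho, 2))  # estimated operational validity, about 0.48

Note that the order of corrections matters: when range restriction is direct, the usual sequence is to correct for criterion unreliability first, using the restricted-sample reliability, and then apply the range-restriction correction.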