Abstract
In both theoretical and applied literatures, there is confusion regarding accurate values for expected Black–White subgroup differences in personnel selection test scores. Much confusion arises because empirical estimates of standardized subgroup differences (d) are subject to many of the same biasing factors associated with validity coefficients (i.e., d is functionally related to a point‐biserial r). To address such issues, we review/cumulate, categorize, and analyze a systematic set of many predictor‐specific meta‐analyses in the literature. We focus on confounds due to general use of concurrent, versus applicant, samples in the literature on Black–White d. We also focus on potential confusion due to different constructs being assessed within the same selection test method, as well as the influence of those constructs on d. It is shown that many types of predictors (such as biodata inventories or assessment centers) can have magnitudes of d that are much larger than previously thought. Indeed, some predictors (such as work samples) can have ds similar to those associated with paper‐and‐pencil tests of cognitive ability. We present more realistic values of d for both researcher and practitioner use. Implications for practice and future research are noted.
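For readers unfamiliar with the functional relationship noted above, the standard psychometric conversion between the standardized mean difference d and the point-biserial correlation r (not reproduced from the article itself; stated here under the usual assumption of subgroup proportions p and q = 1 − p) is

d = \frac{r}{\sqrt{pq\,(1 - r^{2})}}, \qquad r = \frac{d}{\sqrt{d^{2} + \tfrac{1}{pq}}},

which reduces to d = 2r/\sqrt{1 - r^{2}} when the two subgroups are of equal size (p = q = .5). Because r is attenuated by factors such as range restriction and unequal subgroup proportions, d estimates inherit those same biases.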