Abstract
Previous research on the effects of bias in criterion-related validation research has typically involved the use of statistical corrections for halo, leniency, and/or central tendency. We argue that raters' likability toward, and similarity to, ratees may constitute a form of predictor-related criterion bias. This form of bias cannot be investigated without a clear understanding of method, predictor, and criterion constructs and careful direct measurement of each; yet the measurement and theorizing of method constructs are rarely, if ever, undertaken in criterion-related validation work. We report the results of a criterion-related validation in which quantitative and verbal ability measures, an interview, and role-play simulations were used to predict the performance of 372 federal investigative agents. Using the all-Y LISREL model (Williams & Anderson, 1994), we found that likability and similarity factors were related to the interview and role-play measures. However, none of these potential "biases" affected both predictor and criterion constructs, so there was no effect on the estimates of the relationships between the predictors and criteria. We discuss limitations on the generalizability of these results to criterion-related research in which performance data are collected less carefully, as well as the advantages and disadvantages of more traditional regression and correlational analyses.