Abstract

Performance appraisals are widely used in organizations and typically involve raters evaluating groups of subordinates on a set of items designed to represent job performance over a predetermined period (e.g., annually). A defining but often overlooked characteristic of performance appraisals is that they are cyclical. Because raters conduct appraisals over many cycles, measures of job performance may not be equivalent across time. This matters because changes or differences in aggregated performance ratings can be meaningfully interpreted only if raters' definitions of job performance, their interpretation of the items, and their view of what constitutes each level of performance remain unchanged over time, unaffected by their experience with appraisals. Although critical to the interpretation of job performance scores, measurement invariance concerns are largely absent from the literature. The current research investigated the extent to which rater experience affected the conceptualization and measurement of performance, using performance data from a major South American company comprising information on raters and ratees across several appraisal cycles. In the between-rater design, measurement invariance was analyzed using ratings from one performance appraisal cycle provided by 514 raters divided into groups according to their level of experience. The within-rater design analyzed ratings from the same 80 raters across their first three appraisal cycles. In the between-rater analysis, the data supported measurement invariance across raters with different levels of experience. Results from the within-rater analysis suggested that the job performance factor structure was not the same across cycles. Implications for research and practice are discussed.
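
The abstract does not specify the software or the exact invariance steps the authors used. As a loose illustration of the between-rater comparison only, the sketch below fits the same one-factor job-performance CFA separately within each rater-experience group and compares fit indices, which corresponds to the configural (equal structure) step of an invariance test. It uses Python's semopy library with hypothetical item names (item1-item4), a hypothetical data file, and a hypothetical grouping column; it is not the authors' analysis pipeline.

```python
# Illustrative sketch of a configural-invariance check across rater-experience groups.
# Item names, the CSV file, and the grouping column are hypothetical; the paper's
# actual items, software, and invariance procedure are not given in the abstract.
import pandas as pd
from semopy import Model, calc_stats

# Single-factor job-performance CFA in lavaan-style model syntax
DESC = "performance =~ item1 + item2 + item3 + item4"
ITEMS = ["item1", "item2", "item3", "item4"]

def fit_group(group_df: pd.DataFrame) -> pd.DataFrame:
    """Fit the one-factor CFA to one group's ratings and return standard fit indices."""
    model = Model(DESC)
    model.fit(group_df[ITEMS])
    return calc_stats(model)

ratings = pd.read_csv("appraisal_ratings.csv")  # hypothetical file: one row per ratee

# Configural step: the same factor structure should fit acceptably in every
# experience group before stricter metric/scalar constraints are considered.
for experience, group in ratings.groupby("rater_experience_group"):
    stats = fit_group(group)
    print(experience, stats[["chi2", "CFI", "RMSEA"]].round(3).to_dict("records")[0])
```

Comparable, acceptable fit in each group is consistent with configural invariance; a full test would then constrain loadings (metric) and intercepts (scalar) to be equal across groups and compare nested models, and the within-rater design would apply the analogous longitudinal constraints across appraisal cycles.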
