Abstract
Three major reviews of the assessment center (AC) construct-validity literature have disagreed as to the most appropriate analytic model for AC postexercise dimension ratings. We report a Monte Carlo study addressing the following questions: (a) To what extent does the “true” model (i.e., the model that generated the data) actually appear to fit the data well? (b) To what extent can a model appear to fit the data well even though it is the wrong model? and (c) Is model fit actually a useful empirical criterion for judging which model is most likely the population model? Results suggest that “true” models may not always appear as the best fitting models, whereas “false” models sometimes appear to offer better fit than the true models. Implications for the study of AC construct validity are discussed.
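The core finding, that an in-sample fit criterion cannot reliably identify the data-generating model, can be illustrated with a minimal Monte Carlo sketch. This is a hypothetical toy example using ordinary regression rather than the paper's actual AC measurement models: data are simulated from a "true" model, and a "false" model with an extra pure-noise predictor is fit alongside it. Because the false model nests the true one, its in-sample fit (R²) is essentially always at least as good.

```python
# Toy Monte Carlo illustration (hypothetical; not the paper's design).
# True model: y = 1 + 0.5*x1 + noise. False model adds an irrelevant x2.
import numpy as np

rng = np.random.default_rng(0)

def r_squared(X, y):
    """In-sample R^2 of an OLS fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / ss_tot

n, reps = 100, 500
false_wins = 0
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)                    # pure-noise predictor
    y = 1.0 + 0.5 * x1 + rng.normal(size=n)    # generated by the true model
    r2_true = r_squared(x1[:, None], y)
    r2_false = r_squared(np.column_stack([x1, x2]), y)
    if r2_false > r2_true:
        false_wins += 1

print(f"false model fit better in {false_wins}/{reps} replications")
```

In virtually every replication the wrong model shows better in-sample fit, which mirrors the abstract's point: apparent model fit alone is a weak criterion for identifying the population model (in practice, penalized criteria or cross-validation are used to counter this).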