Abstract

Researchers are generally advised to provide rigorous item-level construct validity evidence when they develop and introduce a new scale. However, these item-level construct validation efforts are rarely reexamined once the scale is put into use by a wider audience. In the present study, we demonstrate how (a) item-level meta-analysis and (b) substantive validity analysis can be used to comprehensively evaluate construct validity evidence for the items comprising scales. This methodology enables a reexamination of whether critical item-level findings that may have been supported in the initial (often single-study) scale validation process, such as item factor loadings and the fit of the theorized measurement model, hold up across a larger set of heterogeneous samples. Our demonstration focuses on a commonly used scale of task performance and organizational citizenship behavior, and our findings reveal that several of the items do not perform as the initial validation effort may have suggested. In all, our study highlights the need for researchers to incorporate item-level assessments into evaluations of whether construct scales perform as originally promised.
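
The abstract does not specify the exact estimators used, but the two techniques it names have simple, commonly used formulations. The sketch below is a minimal illustration, not the authors' code: it assumes a sample-size-weighted ("bare-bones") aggregation of an item's factor loading across samples for the item-level meta-analysis, and Anderson and Gerbing's (1991) proportion of substantive agreement and substantive-validity coefficient for the substantive validity analysis. Function names and all example data are hypothetical.

```python
# Illustrative sketch only (not the study's code). Assumes:
# (a) item-level meta-analysis = sample-size-weighted mean of an item's
#     factor loading across k independent samples, plus the observed
#     between-sample variance; and
# (b) substantive validity analysis = Anderson & Gerbing (1991) indices
#     from an item-sorting task: p_sa = n_c / N and c_sv = (n_c - n_o) / N,
#     where n_c judges assign the item to its intended construct, n_o is the
#     highest count for any other construct, and N is the number of judges.

from typing import Sequence, Tuple


def weighted_mean_loading(loadings: Sequence[float],
                          sample_sizes: Sequence[int]) -> Tuple[float, float]:
    """Sample-size-weighted mean loading and observed between-sample variance."""
    total_n = sum(sample_sizes)
    mean = sum(l * n for l, n in zip(loadings, sample_sizes)) / total_n
    var = sum(n * (l - mean) ** 2 for l, n in zip(loadings, sample_sizes)) / total_n
    return mean, var


def substantive_validity(n_intended: int, n_highest_other: int,
                         n_judges: int) -> Tuple[float, float]:
    """Proportion of substantive agreement and substantive-validity coefficient."""
    p_sa = n_intended / n_judges
    c_sv = (n_intended - n_highest_other) / n_judges
    return p_sa, c_sv


if __name__ == "__main__":
    # Hypothetical loadings for one item reported in five primary samples.
    loadings = [0.72, 0.58, 0.81, 0.44, 0.66]
    ns = [210, 145, 320, 98, 180]
    mean, var = weighted_mean_loading(loadings, ns)
    print(f"weighted mean loading = {mean:.3f}, observed variance = {var:.4f}")

    # Hypothetical sorting task: 24 of 30 judges assigned the item to its
    # intended construct; at most 4 assigned it to any single other construct.
    p_sa, c_sv = substantive_validity(n_intended=24, n_highest_other=4, n_judges=30)
    print(f"p_sa = {p_sa:.2f}, c_sv = {c_sv:.2f}")
```

In this kind of reexamination, a low weighted mean loading or a large between-sample variance flags an item whose single-study support may not generalize, while a low c_sv flags an item that judges do not consistently map onto its intended construct.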
