level and is expected to have several distinct subdimensions, it is important to think not only about the relationships between the measures and these first-order subdimensions but also about the relationships between the first-order subdimensions and the second-order construct they measure. My colleagues and I (Jarvis et al. forthcoming) provide several examples of marketing constructs conceptualized at this level of abstraction (e.g., market orientation, trust, helping behavior, and perceived risk) and call attention to the fact that the measurement model relating the first-order subdimensions to the measures need not be the same as the measurement model relating the second-order construct to its first-order subdimensions. Although one could reasonably argue that all constructs should be unidimensional, our review suggests that such a view is often inconsistent with the way constructs are defined in the marketing literature. So as a practical matter, this is something authors should think about carefully.

Defend the Construct Domain and Insist on the Conceptually Appropriate Measurement Model

Do not sacrifice construct validity at the altar of internal consistency reliability. Although this is good advice regardless of whether the measures are reflective or formative, it is particularly important to remember when your construct has formative measures, because formative indicator measurement models do not imply high levels of internal consistency reliability (Bollen and Lennox 1991). Therefore, when your measures are formative, it is important to resist the temptation to delete items as a means of improving Cronbach’s alpha (internal consistency reliability). Following this advice may be difficult if the temptation comes from a reviewer in the form of a recommendation. However, you must be vigilant, because the likelihood of inappropriately restricting the domain of the construct and threatening construct validity tends to be greater when formative indicators are eliminated than when reflective indicators (of a unidimensional construct) are eliminated (cf. Bollen and Lennox 1991).

The best way to avoid this unpleasant situation is to head it off by carefully discussing the construct domain and the hypothesized relations between the construct and its measures, and by explicitly noting the implications of your measurement model for how it should be evaluated. If internal consistency reliability is irrelevant, consider providing other evidence of reliability (e.g., item test-retest reliability, as illustrated in the sketch at the end of this section) and/or empirically examining the sensitivity of your findings to different assumptions about reliability. In other words, try to gently educate the reviewers of your manuscript as a means of avoiding problems in the review process, always remembering that your arguments will be more convincing if they come in advance rather than in response to a reviewer’s criticisms.

In conclusion, the problems of poor construct validity and statistical conclusion validity that plague many manuscripts can be minimized if you carefully define the focal constructs in your research, make sure that your measures fully represent them, correctly specify the relations between the measures and constructs in your measurement model, and stick to it. I believe that following this advice will greatly improve your chances of publication success—perhaps more than anything else I might recommend.
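As a purely hypothetical illustration of the points above (not drawn from the original article), the following Python sketch simulates survey responses and shows how dropping a weakly correlated indicator can raise Cronbach’s alpha even though, for a formative construct, doing so would simply narrow the construct domain; it also computes item test-retest correlations as one alternative form of reliability evidence. The simulated data, the cronbach_alpha helper, and all parameter values are illustrative assumptions, not a prescribed procedure.

    # Hypothetical illustration: alpha rises when a weakly correlated item is
    # dropped, even though dropping a formative indicator narrows the construct
    # domain. Also computes item test-retest reliability as an alternative check.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    n = 200
    # Simulated formative indicators: items 1-3 share common variance, while
    # item 4 taps a distinct facet and is nearly uncorrelated with the others.
    core = rng.normal(size=(n, 1))
    items = np.hstack([core + rng.normal(scale=0.5, size=(n, 3)),
                       rng.normal(size=(n, 1))])

    print("alpha, all 4 items  :", round(cronbach_alpha(items), 2))
    print("alpha, item 4 dropped:", round(cronbach_alpha(items[:, :3]), 2))

    # Item test-retest reliability: correlate each item with a (simulated)
    # second administration of the same item.
    retest = items + rng.normal(scale=0.3, size=items.shape)
    for i in range(items.shape[1]):
        r = np.corrcoef(items[:, i], retest[:, i])[0, 1]
        print(f"item {i + 1} test-retest r = {r:.2f}")

Under these assumptions, deleting item 4 improves alpha, but if the construct is formative that "improvement" comes at the cost of construct validity; the per-item test-retest correlations provide reliability evidence that does not depend on the items being internally consistent.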