It is an honor to be invited to comment on Nick, John and Laura's paper. The "formative measurement" theme has roiled methods literatures across the social sciences. Despite the number of publications on this topic, uncertainty remains widespread, and Lee et al.'s (2013) paper should resolve some of this. I agree with their argument that a MIMIC model does not represent simultaneous "formative measurement" and "reflective measurement." Factor-based structural equation modeling is, indeed, a factor-based method. If the MIMIC model is correct in the population, then the variance explained by the set of predictors, plus the variance of an error term if present, will exactly equal the variance of the factor which the MIMIC model engenders. Adding another predictor will not change this total variance, but using a different set of factor indicators may well change the total variance, demonstrating that the model is about the factor.

Still, I have significant differences with Lee, Cadogan and Chamberlain's perspective and recommendations. In my opinion, vague language lies at the root of the problem, so my own prescription would begin with scrapping the terms "construct," "latent variable," "reflective measurement," and "formative measurement," at least as they are currently used.
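The variance identity described above can be illustrated with a small simulation. This is a minimal sketch, not an analysis from the paper: the path coefficients, predictor covariance, and disturbance variance below are hypothetical values chosen only to show that, when the MIMIC structure holds in the population, the factor's variance equals the variance explained by the predictors plus the disturbance variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # large sample so sampling error is negligible

# Hypothetical population values (illustrative only)
gamma = np.array([0.5, 0.3])                 # paths from the predictors to the factor
sigma_x = np.array([[1.0, 0.4],
                    [0.4, 1.0]])             # covariance of the two predictors
zeta_var = 0.2                               # disturbance (error) variance

x = rng.multivariate_normal([0.0, 0.0], sigma_x, size=n)
zeta = rng.normal(0.0, np.sqrt(zeta_var), size=n)

# The factor the MIMIC model engenders
eta = x @ gamma + zeta

# Explained variance (gamma' * Cov(x) * gamma) plus disturbance variance
explained = gamma @ np.cov(x, rowvar=False) @ gamma
total = explained + zeta_var

# The factor's variance matches the decomposition (up to sampling error)
print(np.var(eta), total)
```

Adding a third predictor with a zero path, or re-expressing the same predictors, leaves this total unchanged; only altering the model's factor side (the indicators) can change the variance being decomposed.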
Their recommendation regarding fixing weights is a fine idea, but our failure to fix model parameters more generally points to the failure of decades of published research to actually reveal much about the topics of study. Instead of the narrow issue and closely tailored solution presented by Lee, Cadogan and Chamberlain, researchers need to confront the fundamental, unresolved problems which lie at the very root of psychological measurement.

Language

In the course of their discussion, Lee, Cadogan and Chamberlain note, "The situation is not helped by the plethora of different terms used in the literature…" Rather than being only an aggravating circumstance, however, vagueness of language is one of the most central problems not only in the "formative measurement" literature but across the field of psychological measurement. The terms "construct" (Maraun and Gabriel 2013; Rigdon 2012) and "latent variable" (as noted by Lee et al. 2013; Maraun 1996) both carry multiple meanings, with the result (and, one sometimes suspects, the intent) that thinking is confused and distinctions are obscured.

In typical research situations, researchers working with multiple indicators operate at three levels of abstraction (Rigdon 2012). Researchers typically wish to make inferences at the most abstract level, about the behavior of variables which are inherently unobservable but whose existence is implied by theory. At the least abstract level, researchers make their inferences on the basis of data, collected by observation of one sort or another. At the intermediate level, in order to empirically evaluate their inferences, researchers form representations of the abstract variables using the observed variables. Depending on the statistical method, these representations may be common factors, weighted composites, or something else.

"Construct" and "latent variable" are used to denote variables at both the most abstract level and at the intermediate level.
In places, Lee, Cadogan and Chamberlain seem to use the term "latent variable" in precisely this way, referring to both conceptual variables and intermediate-level representations. This blurring of distinctions has encouraged researchers to believe that their empirical representations, at the intermediate level, are equivalent to the conceptual variables standing at the highest level of abstraction (Maraun and Gabriel 2013).