… avoidance of aggregation biases, and so on. Franses (2005) provides an inventory of commonly used diagnostic tests. He also shows (see his Table 2 for 1998–2000 and Table 3 for 2001–2003) that there is a paucity of actual use of diagnostic tests in articles published in JMR. Such paucity may suggest that researchers have a disincentive to conduct all relevant tests. For example, although researchers presumably prefer to publish a valid model rather than an invalid one, publishing an invalid model may still be preferred to not publishing at all, especially if it is difficult for readers to detect model deficiencies. Given data constraints, it is virtually impossible for researchers to accommodate all possible nuances. Thus, researchers rely on theories and experience to decide which aspects are the most critical to include in a model. With accumulating empirical evidence in the literature, the expectation is that future modeling efforts will be better informed and thus likely to provide increasingly useful (i.e., valid and reliable) results. However, I urge researchers to consult the checklist that Franses provides and to conduct all diagnostic tests when appropriate.

In applied econometrics, there are three possible reasons for specific tests not to be used. First, researchers may argue convincingly that a test does not apply in the model's context. For example, testing the null hypothesis of zero autocorrelation in the error term is irrelevant in a model of purely cross-sectional data. (Separately, I note that the use of generalized least squares rather than ordinary least squares to accommodate serial correlation in time-series data is a technical correction that is not convincing unless the researcher can justify how serial correlation logically arises in an otherwise correctly specified model.) Second, some tests may not yet be available for cases other than linear models with normally distributed errors. Third, researchers may argue that the violation of a particular assumption does not invalidate the substantive results. For example, consistency of ordinary least squares does not require normality of the error term. In all other cases, it is the researchers' responsibility to conduct and report appropriate diagnostic tests. The model cannot be assumed to be valid unless proper diagnostic tests fail to reject its assumptions.

Because all models are incomplete representations of reality, a central question is: How can we obtain a model that is superior to a meaningful alternative (e.g., judgment, or a simpler representation than the proposed model)? If the researcher's interest is in discovering how marketing activities affect purchases or other responses (as in a "causal" model), any comparison to a model without marketing variables seems useless. Still, it might be argued that …
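To make the point about routine diagnostics concrete, the following is a minimal sketch in Python using statsmodels. The simulated data, the variable names (advertising, price, sales), and the particular tests shown are illustrative assumptions on my part; they are not the checklist that Franses (2005) provides.

```python
# A minimal sketch of routine diagnostic testing after OLS estimation.
# The data are simulated and the tests shown are illustrative only.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_breuschpagan
from statsmodels.stats.stattools import durbin_watson, jarque_bera

rng = np.random.default_rng(0)
n = 200
advertising = rng.normal(size=n)  # hypothetical marketing variable
price = rng.normal(size=n)        # hypothetical marketing variable
sales = 1.0 + 0.5 * advertising - 0.8 * price + rng.normal(size=n)

X = sm.add_constant(np.column_stack([advertising, price]))
fit = sm.OLS(sales, X).fit()

# Autocorrelation tests: meaningful only when observations are ordered
# in time; for purely cross-sectional data they are irrelevant.
print("Durbin-Watson statistic:", durbin_watson(fit.resid))
print("Breusch-Godfrey LM p-value:", acorr_breusch_godfrey(fit, nlags=2)[1])

# Normality of the errors: OLS consistency does not require it,
# but exact small-sample inference does.
print("Jarque-Bera p-value:", jarque_bera(fit.resid)[1])

# Heteroskedasticity in the errors:
print("Breusch-Pagan p-value:", het_breuschpagan(fit.resid, fit.model.exog)[1])
```

Which tests belong in the battery depends on the data structure and model class; as noted above, the autocorrelation statistics in this sketch are meaningful only for time-series data.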
Beginning with the August 2003 issue, Journal of Marketing Research (JMR) has published one or more comments on the lead article, followed by a rejoinder. I have asked experts to provide commentary on one article in each issue that I believe has especially relevant content for researchers and managers. In the current issue, the lead article is an invited paper for which I also asked several experts to provide comments. In all cases, I provide an opportunity for the author(s) of the original article to prepare a rejoinder in the same issue. My hope is that such related reflections and commentaries on a current topic will enhance the value of JMR to readers.

Although I see no reason to entice authors to express strong disagreements about specific issues, I expect that such collections of articles will enable readers to become more informed about the differences in perspective that researchers with substantial expertise and experience have on important issues.