While perusing the April 2012 issue, I was struck by the occurrence of the same graphical faux pas in at least three of the articles: the representation of the relationship between two variables by graphing a model-derived coefficient instead of the actual data.

In the first article,1 both panels of Figure 1 contain two straight lines. Each line, however, is really just a depiction of two points: the (adjusted?) odds of an outcome for people living in low or high religious climates (“low” and “high” defined as below and above a median score). The reader is given the impression of a linear relationship between the outcome and the religious climate score, which may or may not be true; all one sees is a model-based straight line between two points. Each panel is really just a representation of four numbers.

In the second article,2 Figures 1 and 2 each contain a single curve that is a transformation of the coefficient of a logistic regression model (with intercept determined by unspecified values of other variables) relating suicide risk to a media-exposure latent factor. Yet each point carries multiple numeric labels, giving the impression that these are actual data, which is not the case. Worse, the horizontal axis is a latent factor with an apparently meaningless scale; had it been expressed in standard deviations of the factor in the study population, it would have been more interpretable. It would have been better to group the data by standard-deviation intervals and show the average probability (“propensity” in the article) of suicide, or the mean adjusted probability, or a boxplot of the probabilities in each interval.

The third article3 has a graph (Figure 2) showing the probability of smoking as a function of exposure to one kind of advertising, with separate lines for another kind to show the nature of the interaction. Again, we see straight lines, each of which is really just the representation of one coefficient.
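The grouping-by-standard-deviation approach suggested above can be sketched in a few lines; this is a minimal illustration on simulated data, and the function name, bin edges, and data are my own assumptions, not anything from the cited studies:

```python
import numpy as np

def mean_outcome_by_sd_bin(scores, outcomes, edges=(-2, -1, 0, 1, 2)):
    """Group a continuous exposure into standard-deviation intervals
    and return the observed mean outcome (proportion) in each bin."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    z = (scores - scores.mean()) / scores.std()   # standardize the exposure
    bins = np.digitize(z, edges)                  # assign each point an SD bin
    return {b: outcomes[bins == b].mean() for b in np.unique(bins)}

# Hypothetical simulated data: binary outcome whose risk rises with exposure.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (rng.random(500) < 1 / (1 + np.exp(-x))).astype(int)
props = mean_outcome_by_sd_bin(x, y)              # observed proportion per bin
```

Plotting these per-bin proportions (or boxplots of adjusted probabilities within each bin) shows the reader what the data actually do in each region of the exposure scale, rather than a curve drawn entirely from one coefficient.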
This oversmoothing of the data is unwarranted; we should see either summaries of the actual data or some less smoothed version, say from multiple-knot spline fits. As it stands, the reader is tempted to read off a specific probability for a specific exposure, which may be reasonable near the middle of the figure but is less reliable near the extremes. On a related note, none of these graphs gave any visual indication of the variability of the depicted curves, further contributing to the impression of accuracy of the (mainly nonexistent) data contained therein. When we have the data, we should show them, and not imply quantitative relationships that have not been quantified. The figures cited above have an unusually low data density, which is not a desirable trait.4 Each of the articles appears meticulously researched and well written, and it may be that each of the figures tells a story that largely reflects the underlying data, but this should be left to the judgment of the reader upon seeing the data.
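As one concrete version of the less smoothed alternative mentioned above, a multiple-knot spline can be fit by ordinary least squares using a truncated-power basis. The sketch below uses a piecewise-linear spline with a single knot; the function name, knot placement, and simulated relationship are illustrative assumptions, not drawn from the cited articles:

```python
import numpy as np

def linear_spline_fit(x, y, knots):
    """Least-squares fit of a piecewise-linear (multiple-knot) spline:
    a lightly smoothed alternative to forcing a single straight line."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    def design(xs):
        # Columns: intercept, x, and one truncated term max(0, x - knot)
        # per knot, which lets the slope change at each knot.
        return np.column_stack([np.ones_like(xs), xs] +
                               [np.clip(xs - k, 0, None) for k in knots])

    coef, *_ = np.linalg.lstsq(design(x), y, rcond=None)
    return lambda xnew: design(np.asarray(xnew, dtype=float)) @ coef

# Hypothetical curved relationship: a single straight line would
# oversmooth the change in slope at x = 2.
x = np.linspace(0, 4, 200)
y = np.where(x < 2, 0.1 * x, 0.2 + 0.6 * (x - 2))
f = linear_spline_fit(x, y, knots=[2.0])
```

With enough knots, the fitted curve bends where the data bend, so the reader sees roughly how the relationship behaves across the exposure range instead of an extrapolated straight line.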