Clinicians often suspect that a treatment effect can vary across individuals. However, they usually lack “evidence-based” guidance regarding potential heterogeneity of treatment effects (HTE). Potentially actionable HTE is rarely discovered in clinical trials and is widely believed, or rationalized, by researchers to be rare. Conventional statistical tests for HTE are extremely conservative and tend to reinforce this belief. In truth, however, there is no realistic way to know whether a common, or average, effect estimated from a clinical trial is relevant for all, or even most, patients. This absence of evidence, misinterpreted as evidence of absence, may result in suboptimal treatment for many individuals. We first summarize the historical context in which current statistical methods for randomized controlled trials (RCTs) were developed, focusing on the conceptual and technical limitations that shaped, and restricted, those methods. In particular, we explain how the common-effect assumption came to be virtually unchallenged. Second, we propose a simple graphical method for exploratory data analysis that can provide useful visual evidence of possible HTE. The basic approach is to display the complete distribution of outcome data rather than relying uncritically on simple summary statistics. Modern graphical methods, unavailable when standard statistical methods were formulated a century ago, now make such fine-grained interrogation of the data feasible. We propose comparing the observed treatment-group data to “pseudo data” engineered to mimic what would be expected under a particular HTE model, such as the common-effect model. A clear discrepancy between the distributions of the common-effect pseudo data and the observed treatment-group data provides prima facie evidence of HTE and motivates additional confirmatory investigation. Artificial data are used to illustrate the practical implications of ignoring heterogeneity and the usefulness of the graphical method.
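As a rough sketch of the idea rather than the paper's exact construction, the following Python snippet simulates artificial outcome data with hidden HTE, builds common-effect pseudo data by shifting the control outcomes by the estimated average treatment effect, and compares the two distributions with a quantile-quantile display. The outcome model, the shift-based pseudo-data recipe, and the Q-Q comparison are all illustrative assumptions introduced here for concreteness.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Artificial data: control outcomes, and treatment outcomes with a
# heterogeneous effect (half of patients gain 10 points, half gain nothing).
n = 500
control = rng.normal(loc=50.0, scale=10.0, size=n)
effect = np.where(rng.random(n) < 0.5, 10.0, 0.0)  # hidden HTE
treatment = rng.normal(loc=50.0, scale=10.0, size=n) + effect

# Common-effect pseudo data: shift the control distribution by the
# estimated average effect, as if every patient received that same benefit.
avg_effect = treatment.mean() - control.mean()
pseudo = control + avg_effect

# Compare the full distributions, not just the means, via a Q-Q plot.
q = np.linspace(0.01, 0.99, 99)
plt.plot(np.quantile(pseudo, q), np.quantile(treatment, q), "o", ms=3)
lims = [min(pseudo.min(), treatment.min()),
        max(pseudo.max(), treatment.max())]
plt.plot(lims, lims, "k--", lw=1)  # identity line: common-effect prediction
plt.xlabel("Common-effect pseudo-data quantiles")
plt.ylabel("Observed treatment-group quantiles")
plt.title("Departure from the dashed line suggests possible HTE")
plt.show()
```

Under a genuinely common effect, the plotted quantiles would track the dashed identity line; in this simulation the mixture of responders and non-responders spreads the treatment distribution relative to the shifted control distribution, bending the quantile curve away from the line even though the two groups share the same mean difference.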