Morrissey (2016) is an enjoyable but challenging read that highlights misapplications of meta-analysis to questions in evolutionary biology. The problems highlighted in the three case studies all arise when estimating the mean magnitude rather than the mean value of a relationship (i.e. using absolute rather than signed effect sizes). A statistical maven speaks, but the language remains technical, and the message might be lost or, worse, misunderstood. I therefore focused my efforts on summarizing some key messages in a form that I could use to teach students, and my commentary is directed to such readers. The result is a cartoon (Fig. 1), which I hope provides accessible insights into the problems Morrissey raised.

We can note the following: none of the above qualifiers negates Morrissey's insight that transforming then analysing observed effect sizes inflates the estimated mean magnitude of an effect. The technical validity of Morrissey's analyse-then-transform mixed-model approach to resolving the problem is beyond me, but it makes sense because it uses the appropriate variances. Ultimately, Fig. 1 simply illustrates that variances are misspecified in a meta-analysis of absolute values.

In hindsight, the problem is fairly obvious, but in what other situations do problems arise? Morrissey suggests that ‘many quantities of potential meta-analytic interest might best be obtained by modeling the distribution of quantities that are reported in the literature’, but which quantities? Primary studies (‘the literature’) can report findings in ways that violate underlying model assumptions and bias estimates. For example, a publication bias towards statistically significant results generates an asymmetric distribution of effect sizes that biases mean estimates upwards for nonzero true effects (Jennions et al., 2013). A similar problem arises for ‘quantities’ that tend to go unreported when negative, such as heritability.
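The transform-then-analyse inflation described above is easy to demonstrate. The following is a minimal sketch (not Morrissey's actual simulation; the effect size of zero and sampling standard deviation of 0.5 are arbitrary illustrative values): when the true effect is zero in every study, averaging absolute values of noisy estimates still yields a substantial apparent mean magnitude, because folding pure sampling noise has expectation sd × √(2/π) rather than zero.

```python
import random
import statistics

random.seed(1)

true_effect = 0.0   # true signed effect, identical in every study
sampling_sd = 0.5   # sampling error of each study's estimate (arbitrary)
n_studies = 10_000

# Each study reports a noisy estimate of the (zero) true effect.
estimates = [random.gauss(true_effect, sampling_sd) for _ in range(n_studies)]

# Transform-then-analyse: take absolute values first, then average.
mean_abs = statistics.mean(abs(e) for e in estimates)

# The true mean magnitude is 0, but folding pure noise gives an
# expected value of sampling_sd * sqrt(2 / pi), roughly 0.4 here.
print(f"true magnitude: {abs(true_effect):.3f}")
print(f"transform-then-analyse estimate: {mean_abs:.3f}")
```

The bias does not shrink with more studies; it is a property of the folded sampling distribution, which is why modelling the signed estimates and transforming afterwards is the sensible order of operations.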
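The upward bias from significance-driven publication can be sketched in the same spirit (again an illustrative toy, not an analysis from Jennions et al., 2013; the true effect of 0.3 and standard error of 0.2 are arbitrary assumed values): if only estimates exceeding the conventional |z| > 1.96 threshold are ‘published’, the mean of the published estimates overshoots the true effect.

```python
import random

random.seed(7)

true_effect = 0.3   # nonzero true effect (arbitrary illustrative value)
se = 0.2            # common standard error across studies (arbitrary)
n_studies = 50_000

# Only 'statistically significant' estimates (|z| > 1.96) reach print.
published = [est for est in
             (random.gauss(true_effect, se) for _ in range(n_studies))
             if abs(est / se) > 1.96]

mean_published = sum(published) / len(published)
print(f"true effect: {true_effect:.2f}")
print(f"mean published estimate: {mean_published:.2f}")  # biased upwards
```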
Also, some reported ‘quantities’ already have distributions that violate assumptions underlying standard meta-analyses (Mengersen & Gurevitch, 2013). Unfortunately, Morrissey's three case studies all seem to turn on the ‘absolute value’ problem. A longer list of problematic quantities (whether reported in the primary literature or in ‘literature reports’) could help to identify broader categories of concern. In my view, highlighting ‘quantities that do not depend on the dispersion of the values reported in the literature’ is unhelpful. The follow-up suggestion to be cautious if ‘the quantity of interest is an aspect of the dispersion’ is intriguing, and I do not dispute it, but the underpinning reasoning is opaque.

Morrissey's case studies are excellent reminders that conceptual problems are often associated with an incorrect or even unstated null hypothesis. The sexual antagonism case study is a great example. Whenever estimates are imprecise, secondary relationships will contain spurious pairings. Morrissey cleverly illustrates this by simulating pairs of estimated selection gradients where there is no selection on either sex. Estimates of sexual antagonism arose in 50% of cases (his fig. 3c). Simulations are indeed valuable, but you do not always need a formal simulation. Here, simply consider what happens when you toss a coin twice: in 50% of cases, you get a head (positive) and a tail (negative). It is a short leap to work out what happens with a coin that has a side bias.

Morrissey concludes with a cautionary note that meta-analysis is reducing the use of qualitative synthesis (i.e. narrative reviews). No one can dispute that individual studies can be deeply insightful. However, it is always perilous to extrapolate from them. Textbooks are littered with nonreplicable studies that once seemed solid. There is no alternative to quantitatively synthesizing data from multiple studies.
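Returning to the sexual antagonism example above, the coin-toss logic can be checked in a few lines (a sketch in the spirit of Morrissey's fig. 3c, not his actual code; the gradient standard deviation of 0.2 is an arbitrary choice): when true selection is zero in both sexes, the two noisy estimates have opposite signs, and hence look ‘antagonistic’, about half the time.

```python
import random

random.seed(42)

n_pairs = 100_000
sd = 0.2   # sampling error of each gradient estimate (arbitrary)

antagonistic = 0
for _ in range(n_pairs):
    beta_males = random.gauss(0.0, sd)     # true selection is zero
    beta_females = random.gauss(0.0, sd)   # true selection is zero
    if beta_males * beta_females < 0:      # opposite signs: apparent antagonism
        antagonistic += 1

frac = antagonistic / n_pairs
print(f"apparently antagonistic pairs: {frac:.3f}")  # close to 0.5
```

This is exactly the two-coin argument: two independent signs disagree with probability 2 × 0.5 × 0.5 = 0.5, and a ‘side bias’ (a nonzero true gradient in one or both sexes) simply shifts those probabilities.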
Perhaps we should refine our inclusion criteria (based on study design, not outcome), but that merely means we should conduct better meta-analyses. Misapplication of many statistical analyses is rife, but we do not abandon them; if misuse were grounds for abandonment, where would that leave mixed models? The same reasoning holds for meta-analysis.