Abstract

Infant research notoriously suffers from small samples, resulting in low power. Beyond increasing sample sizes, improving the reliability of our measurements can also increase power and help find more reliable effects. Byers‐Heinlein, Bergmann and Savalei (2021) provide both an analysis of the problem of (low) reliability and a number of valuable recommendations. One of the recommendations is to ‘exclude unreliable data’. Although this may increase the effect size found in the remaining data, it can also unjustifiably bias the estimates when the cause of the unreliability is unknown. In such cases, it is better to embrace the variability and use it to characterize the population: variability is also informative. Modern analytical techniques can be used to deal with variability and with missing data. No data should be left behind!

Highlights

- Variability and individual differences are the bread and butter of developmental science.
- Discarding variable/unreliable data carries the risk of biasing effect size estimates.
- Variable and missing data can be dealt with appropriately using modern analytical approaches.

Byers‐Heinlein et al. (2021) argue that lack of reliability hinders progress in infant research, and they provide recommendations for improving reliability. This is indeed much needed: better measurement instruments, more data/trials per infant, and better reporting of the psychometric properties of measurement instruments will improve inference in individual studies and in our field as a whole. One recommendation, however (‘exclude low quality data from analysis’), is risky and unnecessary.
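As a minimal illustration of the kind of analytical approach that can use all available data, the sketch below fits a mixed-effects model with a random intercept per infant to simulated looking-time data in which infants contribute unequal numbers of trials. The data, variable names (infant, condition, looking_time) and the use of the Python statsmodels library are assumptions for illustration only, not the analysis discussed by the authors.

```python
# Hedged sketch: a mixed-effects model keeps every trial, even from infants
# who contribute few or noisy trials, instead of excluding their data.
# All data below are simulated; names and parameters are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for infant in range(40):
    n_trials = rng.integers(4, 13)      # unequal trial counts across infants
    baseline = rng.normal(10, 2)        # infant-specific baseline (individual variability)
    for trial in range(n_trials):
        condition = trial % 2           # 0 = familiar, 1 = novel (hypothetical design)
        looking_time = baseline + 1.5 * condition + rng.normal(0, 3)
        rows.append({"infant": infant, "condition": condition,
                     "looking_time": looking_time})
df = pd.DataFrame(rows)

# Random intercept per infant: between-infant variability is modelled rather
# than discarded, and infants with fewer trials are shrunk toward the group mean.
model = smf.mixedlm("looking_time ~ condition", data=df, groups=df["infant"])
result = model.fit()
print(result.summary())
```

In such a model, trials missing at random simply reduce an infant's contribution to the estimate rather than forcing that infant's exclusion, which is one way variable and incomplete data can still inform the population-level estimate.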
