Abstract

When used appropriately, non-significant p-values have the potential to further our understanding of what does not work in education, and why. When misinterpreted, they can trigger misguided conclusions, for example about the absence of an effect of an educational intervention, or about a difference in the efficacy of different interventions. We examined the frequency of non-significant p-values in recent volumes of peer-reviewed educational research journals. We also examined how frequently researchers misinterpret non-significance to imply the absence of an effect, or a difference from another, significant effect. Within a random sample of 50 peer-reviewed articles, we found that of 528 statistically tested hypotheses, 253 (48%) were non-significant. Of these, 142 (56%) were erroneously interpreted to indicate the absence of an effect, and 59 (23%) to indicate a difference from another, significant effect. For 97 (38%) of non-significant results, such misinterpretations were linked to potentially misguided implications for educational theory, practice, or policy. We outline valid ways of dealing with non-significant p-values to improve their utility for education, and discuss potential reasons for these misinterpretations and their implications for research.
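To make the distinction concrete, the sketch below illustrates one commonly recommended valid follow-up to a non-significant test, an equivalence test via two one-sided tests (TOST); this is an illustration of the general idea, not necessarily the specific procedure the article recommends, and the group labels, sample sizes, and equivalence bounds of ±0.3 raw score units are hypothetical.

```python
# A minimal TOST sketch, assuming hypothetical intervention/control score
# data and hypothetical equivalence bounds of +/- 0.3 raw score units.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.00, 1.0, 150)  # hypothetical control-group scores
treated = rng.normal(0.05, 1.0, 150)  # hypothetical intervention-group scores

# Standard null-hypothesis test: a non-significant p here does NOT
# demonstrate that the intervention has no effect.
_, p_nhst = stats.ttest_ind(treated, control)

# TOST: conclude "difference lies within [-0.3, +0.3]" only if BOTH
# one-sided tests are significant. Shifting one sample by a bound turns
# each test of "difference = 0" into a test against that bound.
low, high = -0.3, 0.3
_, p_lower = stats.ttest_ind(treated, control + low, alternative="greater")
_, p_upper = stats.ttest_ind(treated, control + high, alternative="less")
p_tost = max(p_lower, p_upper)

print(f"NHST p = {p_nhst:.3f} (non-significant != absent effect)")
print(f"TOST p = {p_tost:.3f} (significant -> effect within bounds)")
```

A non-significant NHST p-value together with a significant TOST p-value supports a claim of practical equivalence within the chosen bounds, whereas a non-significant NHST p-value alone supports no such claim.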

