Abstract

Much of the evidential basis for recent policy decisions is grounded in effect size: the standardised mean difference in outcome scores between a study's intervention and comparison groups. This is interpreted as measuring the educational influence, importance or effectiveness of the intervention. This article shows that this interpretation is a category error at two levels. At the individual study level, the intervention plays only a partial role in determining effect size, so treating effect size as a measure of the intervention is a mistake. At the meta‐analytic level, the assumptions needed for a valid comparison of the relative effectiveness of interventions on the basis of relative effect size are absurd. While effect size continues to have a role in research design, as a measure of the clarity of a study, policy makers should recognise that it lacks a valid role in practical decision‐making.
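
For concreteness, the standardised mean difference at issue is conventionally computed as Cohen's d: the difference between the intervention (T) and comparison (C) group mean outcome scores, divided by a pooled standard deviation. The formulation below is a standard sketch, not necessarily the exact variant any given study uses (bias‐corrected alternatives such as Hedges' g are also common):

\[
d = \frac{\bar{x}_T - \bar{x}_C}{s_p},
\qquad
s_p = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}
\]

Note that the denominator \(s_p\) depends on the spread of scores in the two samples, which design choices such as participant selection and the reliability of the outcome measure can alter independently of the intervention itself; this is one sense in which the intervention plays only a partial role in the resulting effect size.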
