Abstract

Practical significance is an important concept that moves beyond statistical significance and p values. While effect sizes are not synonymous with practical significance, they provide a basis for evidence of substantive significance. Investigators should calculate and report effect sizes whenever possible. To build evidence for practical significance in pharmacy education, three methods are discussed. First, effect sizes can be compared to general interpretation guidelines for practical significance. Second, investigators can benchmark by comparing their effect sizes to external information from other studies; however, such information is not always available. Where prior data are limited, a third method is to calculate, within the investigators' own cohort, an instrument's minimally important difference; the effect size can then be compared to this minimally important difference rather than to a general interpretation guideline. A method to calculate the minimally important difference is described, along with applications. Regardless of the approach chosen, effect sizes must be determined and should be reported in articles; the comparator used as evidence for practical significance may vary, so interpretation is key. Reporting effect sizes also enables benchmarking by others in the future and facilitates summaries through meta-analysis. Reporting evidence of practical significance with effect sizes is needed; simply reporting statistical significance is not enough. After reading this article, readers should be able to explain practical significance, recognize evidence of practical significance in other reports, and carry out their own analysis of practical significance using one or more of the methods described herein.
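To make the first and third methods concrete, here is a minimal Python sketch. It is not taken from the article: the cohort scores are hypothetical, and the half-standard-deviation rule used for the minimally important difference is one common distribution-based estimate, assumed here for illustration. The sketch computes Cohen's d for two independent groups, interprets it against the conventional guidelines (0.2 small, 0.5 medium, 0.8 large), and compares the raw mean difference to the distribution-based minimally important difference.

```python
# A minimal sketch, not the article's method. Hypothetical data throughout.
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical exam scores for an intervention cohort and a control cohort.
intervention = [78, 85, 82, 90, 74, 88, 81, 79]
control = [72, 75, 80, 70, 77, 73, 76, 74]

d = cohens_d(intervention, control)

# Method 1: compare the effect size to general interpretation guidelines.
if abs(d) < 0.2:
    label = "trivial"
elif abs(d) < 0.5:
    label = "small"
elif abs(d) < 0.8:
    label = "medium"
else:
    label = "large"
print(f"Cohen's d = {d:.2f} ({label} by conventional guidelines)")

# Method 3: a distribution-based minimally important difference, assumed
# here to be half the control-group SD, on the raw score scale.
mid = 0.5 * statistics.stdev(control)
raw_diff = statistics.mean(intervention) - statistics.mean(control)
print(f"Mean difference = {raw_diff:.1f} points; half-SD MID = {mid:.1f} points")
print("Exceeds the MID" if abs(raw_diff) >= mid else "Does not exceed the MID")
```

With these illustrative scores, both the guideline comparison and the minimally important difference comparison point the same way, but the two comparators need not agree in general, which is why interpretation against the chosen comparator matters.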
