Abstract

Some sorts of “evidence” in evidence-based practice seem to carry more weight (e.g., randomized controlled trials; RCTs) than others (e.g., case studies) in applied sport and exercise psychology research. In this article we explore some of the shibboleths of evidence-based treatment, and how some “gold standards,” such as RCTs (as they are often used or misused), may, when sub-optimally executed, provide only tenuous, incomplete, and confounded evidence for what we choose to do in practice. We inquire into the relevance and meaningfulness of practitioner-evacuated research and investigations that use flawed statistical reasoning, and we also ask a central question in evaluating evidence: just because some sorts of positive changes can be measured and counted in various treatment outcome research, do they really “count?”


