Abstract

Problems can arise when researchers try to assess the statistical significance of more than 1 test in a study. In a single test, statistical significance is often determined based on an observed effect or finding that is unlikely (<5%) to occur due to chance alone. When more than 1 comparison is made, the chance of falsely detecting a nonexistent effect increases. This is known as the problem of multiple comparisons (MCs), and adjustments can be made in statistical testing to account for this.1 In this issue of JAMA, Saitz et al2 report results of a randomized trial evaluating the efficacy of 2 brief counseling interventions (ie, a brief negotiated interview and an adaptation of a motivational interview, referred to as MOTIV) in reducing drug use in primary care patients when compared with not having an intervention. Because MCs were made, the authors adjusted how they determined statistical significance. In this article, we explain why adjustment for MCs is appropriate in this study and point out the limitations, interpretations, and cautions when using these adjustments.
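
The inflation described above can be made concrete with a short sketch (ours, not the authors'). Assuming independent tests each conducted at the conventional 5% threshold, the probability of at least 1 false positive across m tests is 1 - (1 - 0.05)^m; the snippet below computes this family-wise error rate and the Bonferroni-adjusted per-test threshold (0.05 / m), one common MC adjustment.

alpha = 0.05  # per-test significance threshold

for m in (1, 2, 5, 10):
    # Probability of >=1 false positive across m independent null tests
    fwer = 1 - (1 - alpha) ** m
    # Bonferroni correction: test each comparison at alpha / m instead
    bonferroni_alpha = alpha / m
    print(f"{m:>2} tests: P(>=1 false positive) = {fwer:.3f}, "
          f"Bonferroni per-test threshold = {bonferroni_alpha:.4f}")

With 10 tests, for example, the chance of at least 1 spurious "significant" finding rises to about 40% unless the per-test threshold is tightened.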
