Abstract

Objective: Sample size calculations play a central role in risk-factor study design because sample size affects study interpretability, cost, hospital resources, and staff time. We demonstrate the consequences of misclassified control groups for the power of risk-association tests, showing that even small misclassification rates in the control group can reduce test power; sample size calculations that ignore misclassification may therefore underpower studies.

Study Design: This was a simulation study based on the designs of published orthopaedic risk-factor studies. We retained those designs but simulated the data to include known proportions of misclassified affected subjects in the control group, then used the simulated data to calculate the power of a risk-association test. Powers were calculated for several study designs and misclassification rates and compared with a reference model.

Results: Treating unlabelled data as disease-negative always reduced statistical power relative to the reference, and the power loss increased with the misclassification rate. For this study, power could be restored to 80% by increasing the sample size by a factor of 1.1 to 1.4.

Conclusion: Researchers should use caution when calculating sample sizes for risk-factor studies and should consider adjusting for estimated misclassification rates.
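The mechanism behind the power loss can be illustrated with a minimal Monte Carlo sketch. All parameters below (exposure rates, group size, misclassification rates) are hypothetical choices for illustration, not the designs or values used in the study: each "control" has some probability of actually being an affected subject, which pulls the control group's exposure rate toward the case rate and dilutes the detectable association.

```python
import numpy as np

def simulated_power(n_per_group=200, p_exp_case=0.40, p_exp_control=0.20,
                    misclass_rate=0.0, n_sims=2000, seed=1):
    """Monte Carlo estimate of the power of a two-sided two-proportion
    z-test (alpha = 0.05) for exposure vs. case status, when a fraction
    of the 'controls' are misclassified affected subjects."""
    rng = np.random.default_rng(seed)
    z_crit = 1.959964  # two-sided 5% critical value of the standard normal
    rejections = 0
    for _ in range(n_sims):
        # Cases carry the case exposure rate.
        cases = rng.random(n_per_group) < p_exp_case
        # Some 'controls' are hidden cases, so they also carry the case rate.
        hidden_case = rng.random(n_per_group) < misclass_rate
        p_ctrl = np.where(hidden_case, p_exp_case, p_exp_control)
        controls = rng.random(n_per_group) < p_ctrl
        # Pooled two-proportion z-test (equal group sizes).
        p1, p2 = cases.mean(), controls.mean()
        pooled = (p1 + p2) / 2
        se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
        if se > 0 and abs(p1 - p2) / se > z_crit:
            rejections += 1
    return rejections / n_sims

# Power with clean controls vs. controls containing 30% hidden cases:
power_clean = simulated_power(misclass_rate=0.0)
power_dirty = simulated_power(misclass_rate=0.3)
```

With these assumed parameters, `power_dirty` comes out noticeably below `power_clean`, matching the abstract's finding that treating unlabelled subjects as disease-negative erodes power as the misclassification rate grows.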

