Abstract

1. Recognize the important characteristics of different study designs; how to evaluate the strength of a study based on those characteristics; and the evidence hierarchy, how it can be efficiently applied, and its limitations.
2. Discuss indices (mean, median, and mode) and intervals (variability, standard deviation, standard error of the mean, and confidence intervals), effect size through association statistics, and regression models.
3. Assess the strength of evidence and the important characteristics of a journal club.

Clinicians are inundated with research, with results presented in statistical terms and frequently without guidance on how to interpret the findings. How do we interpret these studies most efficiently? How do we make the best use of a journal club? Study designs are either quasi-experimental (eg, observational) or experimental (randomized); each approach has advantages and disadvantages, and the study design should match the study purpose. Important study elements include participant eligibility, setting, intervention(s), and study procedures (eg, randomization, masking). Evidence is considered within a hierarchy of study designs ranked to limit bias and improve reliability, and standard approaches to grading studies and results exist. Systematic reviews and meta-analyses present information synthesized across studies; secondary research, such as cost-benefit analysis, models additional outcomes from synthesized results. Statistics consist of variables, parameters, indices, and intervals. Parametric analyses require a normally distributed population. Normality is checked by the closeness of the mean to the median and of the standard deviation to the mean; when these are widely disparate, the sample distribution is skewed and nonparametric statistical methods are needed. Small samples (n < 30) cannot be assumed to be normally distributed. Probability (the P-value) is used for null hypothesis testing, but P-values alone are inadequate to establish statistical significance.
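As a rough, hypothetical illustration of the skew check described above (comparing the mean with the median before choosing parametric or nonparametric methods), here is a short Python sketch using only the standard library; the sample data are invented, not workshop data:

```python
import random
import statistics

# Invented samples for illustration only.
random.seed(0)
normal_sample = [random.gauss(50, 10) for _ in range(200)]        # roughly symmetric
skewed_sample = [random.expovariate(1 / 10) for _ in range(200)]  # right-skewed

def skew_check(sample):
    """Compare mean with median: a large gap (relative to the SD) suggests skew."""
    mean = statistics.fmean(sample)
    median = statistics.median(sample)
    sd = statistics.stdev(sample)
    return mean, median, sd

for name, sample in [("symmetric", normal_sample), ("skewed", skewed_sample)]:
    mean, median, sd = skew_check(sample)
    print(f"{name}: mean={mean:.1f}, median={median:.1f}, sd={sd:.1f}")
# For the right-skewed sample the mean sits well above the median,
# signaling that nonparametric methods are the safer choice.
```

In practice a formal test (eg, Shapiro-Wilk) would supplement this eyeball comparison, but the mean-versus-median gap is the quick screen the abstract describes.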
P-values should be accompanied by intervals (ie, standard error, standard deviation, and confidence intervals); intervals convey the precision of results and the strength of evidence. Observational studies examine associations between an exposure and an event; outcomes are expressed as ratios of risks, rates, or odds. Association statistics (Pearson or Spearman correlation) and predictive (regression) models gauge the size of an effect on an outcome. Multivariable regression models identify predictors of outcomes. Study design, descriptive statistics, effect size, intervals, and trial design are all important to reading the literature critically. Journal clubs are important for team learning and trainee education. Questions of clinical relevance, trial design, strength of evidence, and corollary evidence are central to journal-club discussions and practice guidelines. This preconference workshop introduces clinicians to factors that improve understanding of research.

Structure and Processes of Care
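To make the interval ideas above concrete, the following is a hypothetical Python sketch of a 95% confidence interval for a mean (via the standard error) and for an odds ratio from a 2x2 table; all numbers are invented for illustration:

```python
import math

# --- 95% CI for a mean, from invented summary statistics ---
mean, sd, n = 50.0, 10.0, 100
sem = sd / math.sqrt(n)                            # standard error of the mean
mean_ci = (mean - 1.96 * sem, mean + 1.96 * sem)   # ≈ (48.0, 52.0)

# --- Odds ratio with 95% CI from an invented 2x2 table ---
#                 event   no event
# exposed           30        70
# unexposed         15        85
a, b, c, d = 30, 70, 15, 85
odds_ratio = (a * d) / (b * c)                     # ≈ 2.43
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(odds ratio)
or_ci = (math.exp(math.log(odds_ratio) - 1.96 * se_log_or),
         math.exp(math.log(odds_ratio) + 1.96 * se_log_or))  # ≈ (1.21, 4.87)

# Because the interval excludes 1, this association would be judged
# statistically significant at the 5% level -- the interval, not the
# P-value alone, shows the precision of the estimate.
```

The same pattern (point estimate plus interval) applies to risk ratios and rate ratios; only the standard-error formula changes.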
