Abstract

Background

Pilot/feasibility studies, and studies with small sample sizes, may be associated with inflated effects. This study explores the vibration of effect sizes (VoE) in meta-analyses when different inclusion criteria, based upon sample size or pilot/feasibility status, are applied.

Methods

Searches were conducted to identify systematic reviews that performed meta-analyses of behavioral interventions on topics related to the prevention/treatment of childhood obesity, published from January 2016 to October 2019. The computed summary effect sizes (ES) were extracted from each meta-analysis. Individual studies included in the meta-analyses were classified into one of four categories: self-identified pilot/feasibility studies, or, if not a pilot/feasibility study, classified by sample size (N ≤ 100, N > 100, or N > 370, the upper 75th percentile of sample size). VoE was defined as the absolute difference (ABS) between the summary ES re-estimated within each study classification and the originally reported summary ES. Concordance (kappa) of the statistical significance of summary ES across the four categories of studies was assessed. Fixed- and random-effects models and meta-regressions were estimated. Three case studies are presented to illustrate the impact of including pilot/feasibility and N ≤ 100 studies on the estimated summary ES.

Results

A total of 1602 effect sizes, representing 145 reported summary ES, were extracted from 48 meta-analyses containing 603 unique studies (avg. 22 studies per meta-analysis, range 2–108) and 227,217 participants. Pilot/feasibility and N ≤ 100 studies comprised 22% (range 0–58%) and 21% (range 0–83%) of the studies included in the meta-analyses, respectively. Meta-regression indicated that the ABS between the re-estimated and original summary ES ranged from 0.20 to 0.46, depending on whether the studies comprising the original summary ES were mostly small (e.g., N ≤ 100) or mostly large (N > 370).
Concordance was low when both pilot/feasibility and N ≤ 100 studies were removed (kappa = 0.53) and when analyses were restricted to the largest studies (N > 370, kappa = 0.35), with 20% and 26% of the originally reported statistically significant ES rendered non-significant, respectively. Reanalysis of the three case-study meta-analyses either rendered the re-estimated ES non-significant or halved the originally reported ES.

Conclusions

When meta-analyses of behavioral interventions include a substantial proportion of pilot/feasibility and N ≤ 100 studies, summary ES can be markedly affected and should be interpreted with caution.
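The VoE computation described in the Methods can be sketched as follows: estimate a random-effects summary ES from all studies, re-estimate it on a subset restricted by one of the study classifications (here, N > 100), and take the absolute difference. This is a minimal illustration using the standard DerSimonian-Laird tau-squared estimator; the per-study effect sizes, variances, and sample sizes below are hypothetical, not data from the reviewed meta-analyses.

```python
import math

def dersimonian_laird_summary(effects, variances):
    """Random-effects summary ES via the DerSimonian-Laird estimator."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    # Inverse-variance weighted fixed-effect estimate
    mu_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the DL tau^2 (truncated at zero)
    q = sum(wi * (e - mu_fe) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight with between-study variance added
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)

# Hypothetical per-study data: (effect size, variance, sample size);
# small studies are given larger effects, mimicking the inflation concern.
studies = [
    (0.80, 0.20, 40),
    (0.65, 0.15, 60),
    (0.30, 0.05, 250),
    (0.25, 0.04, 400),
    (0.20, 0.03, 520),
]

original = dersimonian_laird_summary([e for e, _, _ in studies],
                                     [v for _, v, _ in studies])
large_only = [(e, v) for e, v, n in studies if n > 100]
restricted = dersimonian_laird_summary([e for e, _ in large_only],
                                       [v for _, v in large_only])
vibration = abs(restricted - original)  # ABS between re-estimated and original ES
```

With these illustrative inputs, dropping the two small studies pulls the summary ES downward, and `vibration` quantifies how far the restricted re-estimate moves from the original.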


