Abstract

Prevention science has increasingly turned to integrative data analysis (IDA) to combine individual participant-level data from multiple studies of the same topic, allowing us to evaluate overall effect size, test and model heterogeneity, and examine mediation. Studies included in IDA often use different measures for the same construct, leading to sparse datasets. We introduce a graph theory method for summarizing patterns of sparseness and use simulations to explore the impact of different patterns on measurement bias within three different measurement models: a single common factor, a hierarchical model, and a bifactor model. We simulated 1000 datasets with varying levels of sparseness and used Bayesian methods to estimate model parameters and evaluate bias. Results clarified that bias due to sparseness will depend on the strength of the general factor, the measurement model employed, and the level of indirect linkage among measures. We provide an example using a synthesis dataset that combined data on youth depression from 4146 youth who participated in 16 randomized field trials of prevention programs. Given that different synthesis datasets will embody different patterns of sparseness, we conclude by recommending that investigators use simulation methods to explore the potential for bias given the sparseness patterns they encounter.
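The graph-theoretic summary of sparseness the abstract describes can be illustrated with a minimal sketch (not the authors' implementation): treat each measure as a node and connect two measures whenever at least one study administered both. A connected graph means every pair of measures is linked at least indirectly, and path length indexes how indirect the linkage is. The study-to-measure mapping below is hypothetical.

```python
from itertools import combinations
import networkx as nx

# Hypothetical mapping: each study administers a subset of the
# depression measures available across the synthesis dataset.
study_measures = {
    "study_01": ["CDI", "CESD"],
    "study_02": ["CESD", "BDI"],
    "study_03": ["BDI"],
    "study_04": ["CDI", "RADS"],
}

# Build the linkage graph: measures are nodes; an edge connects two
# measures whenever some study administered both of them.
G = nx.Graph()
for measures in study_measures.values():
    G.add_nodes_from(measures)
    for a, b in combinations(measures, 2):
        G.add_edge(a, b)

# A connected graph means all measures can be placed on a common
# scale, directly or via bridging studies.
print("connected:", nx.is_connected(G))

# Shortest path length between two measures indexes indirectness of
# linkage: 1 = co-administered in some study, 2+ = bridged only.
if nx.is_connected(G):
    for a, b in combinations(sorted(G.nodes), 2):
        print(f"{a} - {b}: {nx.shortest_path_length(G, a, b)}")
```

Under this kind of summary, the abstract's conclusion can be read as: the longer the paths connecting measures (i.e., the more indirect the linkage), the greater the potential for measurement bias, with the magnitude also depending on the strength of the general factor and the measurement model used.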
