Background
Integrating behavioral health services into primary care is challenging; a toolkit approach to practice implementation can help. A recent comparative effectiveness randomized clinical trial examined the impact of a toolkit for improving integration on outcomes for patients with multiple chronic conditions. Some aspects of behavioral health integration improved; patient-reported outcomes did not. This report evaluates the implementation strategy (Toolkit) using Proctor’s (2011) implementation outcomes model.

Methods
Using data from the 20 practices randomized to the active (toolkit strategy) arm (education, redesign workbooks, an online learning community, and remote coaching), we identified 23 measures from practice member surveys, coach interviews, reports, and field logs to assess Toolkit acceptability, appropriateness, feasibility, and fidelity. A practice survey score was rated high (met expectations) if its average was ≥ 4 on a 1–5 scale; all other data were coded dichotomously, with high coded as 1.

Results
For acceptability, 74% (14) of practices had high scores for willingness of providers and staff to use the Toolkit and 68% (13) for quality improvement teams liking the Toolkit. For appropriateness, 95% (19) of practices had high scores for the structured process being a good match and 63% (12) for the Toolkit being a good match. Feasibility, measured by Toolkit prerequisites, was scored lower by site members at project end (e.g., a provider leader available as champion: 53% of practices) than by remote coaches observing the practice teams (74%). For “do-ability,” coaches rated feasibility lower for practices (e.g., completion of workbook activities: 32%) than the practice teams rated themselves (68%). Fidelity, assessed across seven measures, was low, with 50% to 78% of practices having high scores on each measure.

Conclusions
Existing data from large trials can be used to describe implementation outcomes. The Toolkit was not implemented with fidelity in at least one quarter of the sites, despite being acceptable and appropriate, possibly because of low feasibility in the form of unmet prerequisites and Toolkit complexity. Variability in fidelity reflects the importance of implementation strategies that fit each organization, suggesting that further study is needed of contextual factors and use of the Toolkit, as well as the relationship between Toolkit use and study outcomes.

Trial registration
ClinicalTrials.gov NCT02868983; date of registration: 08/15/2016.