Abstract

The results of many large-scale federal or multi-site evaluations are typically compiled into long reports that end up sitting on policymakers' shelves. Moreover, the information policymakers need is often buried deep in these reports and may not be remembered, understood, or readily accessible when it is needed. This is not a new challenge for evaluators, and advances in statistical methodology, while they have created greater opportunities for insight, may compound the challenge by creating multiple lenses through which evidence can be viewed. The descriptive evidence from traditional frequentist models, while familiar, is frequently misunderstood, while newer Bayesian methods provide evidence that is intuitive but less familiar. These methods are complementary, and presenting both increases the amount of evidence stakeholders and policymakers may find useful. In response to these challenges, we developed an interactive dashboard that synthesizes quantitative and qualitative data and allows users to access the evidence they want, when they want it, giving each user a customized and customizable view into the data collected for one large-scale federal evaluation. This allows policymakers to select the specifics that are most relevant to them at any moment and to apply their own risk tolerance to the probabilities of various outcomes.
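To make the frequentist/Bayesian contrast concrete, the minimal sketch below is illustrative only: the measure, the prior, and all numbers are hypothetical and are not results from this evaluation. It shows how a Bayesian posterior for a treatment effect on total cost of care can be expressed as probability statements that a user can weigh against their own risk tolerance, rather than as a p-value.

```python
import numpy as np

# Illustrative only: a conjugate normal-normal update for a treatment effect on
# total cost of care (dollars per beneficiary per quarter). All numbers are
# hypothetical, not results from the evaluation.
prior_mean, prior_sd = 0.0, 100.0   # weakly informative prior on the effect
est, se = -35.0, 20.0               # hypothetical frequentist estimate and its SE

# Posterior precision is the sum of the prior and data precisions.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + est / se**2)
post_sd = np.sqrt(post_var)

# Draw from the posterior and express the evidence as probabilities that a
# policymaker can weigh against their own risk tolerance.
rng = np.random.default_rng(0)
draws = rng.normal(post_mean, post_sd, size=100_000)
print(f"P(any savings)                    = {np.mean(draws < 0):.2f}")
print(f"P(savings of $25+ per beneficiary) = {np.mean(draws < -25):.2f}")
```

A frequentist interval summarizes the same estimate, but probability statements about specific outcomes, such as "at least $25 in savings per beneficiary per quarter," are the kind of quantity a dashboard can surface directly for users with different risk tolerances.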

Highlights

  • The results of many large-scale federal or multi-site evaluations are typically compiled into long reports which end up sitting on policymakers’ shelves

  • Bayesian methods incorporate evidence in the form of priors to estimate probability distributions that are predictive of future performance. Because frequentist and Bayesian methods are complementary, presenting both increases the amount of evidence stakeholders and policymakers may find useful. In response to these challenges, we developed an interactive dashboard that synthesizes quantitative and qualitative data for a large-scale federal meta-evaluation, allowing users to access a customized view of the evidence they want, when they want it

  • Total cost of care was measured as dollars per beneficiary per quarter, and the three utilization measures as visits/events per 1,000 beneficiaries per quarter (a minimal normalization sketch follows this list)
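As a minimal sketch of how those measures might be normalized, assuming hypothetical awardee-quarter data (the column names and values below are invented for illustration, not taken from the evaluation's data dictionary):

```python
import pandas as pd

# Hypothetical awardee-quarter data; column names are assumed for illustration.
df = pd.DataFrame({
    "awardee":       ["A", "A", "B", "B"],
    "quarter":       ["2023Q1", "2023Q2", "2023Q1", "2023Q2"],
    "beneficiaries": [1200, 1250, 800, 790],
    "ed_visits":     [180, 165, 140, 150],
    "total_cost":    [2_400_000, 2_350_000, 1_900_000, 1_950_000],
})

# Utilization: visits per 1,000 beneficiaries per quarter.
df["ed_visits_per_1000"] = 1000 * df["ed_visits"] / df["beneficiaries"]

# Total cost of care: dollars per beneficiary per quarter.
df["cost_per_beneficiary"] = df["total_cost"] / df["beneficiaries"]

print(df[["awardee", "quarter", "ed_visits_per_1000", "cost_per_beneficiary"]])
```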


Summary

Introduction

The results of many large-scale federal or multi-site evaluations are typically compiled into long reports that end up sitting on policymakers' shelves. The goal of any large-scale evaluation is to produce actionable evidence, making maximal use of the information gathered [1]. This may involve both scans across implementation settings and deep-dives into individual sites' experiences and outcomes. While these methods are complementary, presenting both increases the amount of evidence stakeholders and policymakers may find useful. In response to these challenges, we developed an interactive dashboard that synthesizes quantitative and qualitative data for a large-scale federal meta-evaluation, allowing users to access a customized view of the evidence they want, when they want it. Evaluations were coordinated so that common outcomes and measures were used across the awardees.
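As one illustration of scanning across awardees while preserving site-level detail, the sketch below pools hypothetical awardee-level estimates with a DerSimonian-Laird random-effects model. This is a generic technique shown under assumed numbers, not the evaluation's actual model.

```python
import numpy as np

# Hypothetical awardee-level effect estimates (dollars per beneficiary per
# quarter) with their standard errors; values are illustrative only.
y  = np.array([-40.0, -10.0, -55.0, 5.0])
se = np.array([ 15.0,  25.0,  30.0, 20.0])
v  = se**2

w = 1.0 / v                                  # fixed-effect (inverse-variance) weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed)**2)             # heterogeneity statistic
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)      # between-awardee variance

w_re = 1.0 / (v + tau2)                      # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
pooled_se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {pooled:.1f} (SE {pooled_se:.1f}), tau^2 = {tau2:.1f}")
```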

