Abstract

The National Science Foundation has communicated the importance of Broadening Participation in Computing (BPC) by including BPC as a required component of proposals across the Computer and Information Science and Engineering core programs. This call to action creates a need for a collective understanding of how to demonstrate the impact of BPC initiatives through quality evaluation practices. Through our experience conducting a large-scale national evaluation of the STARS BPC program, we have identified three overarching challenges specific to evaluating BPC programs: evaluation costs, measuring longitudinal program impact, and capturing credible evidence of the collective impact of multi-site programs. The STARS Evaluation Cohort Model addresses these challenges by applying an empowerment evaluation framework, which is uniquely applicable to BPC efforts, and by building evaluation capacity through training workshops and tools essential for evaluation fidelity. In this experience report, we share the evaluation framework that supports the model, describe our approach to BPC evaluation challenges, and discuss the evidence base and lessons learned from our practice. Additionally, we present our approach to sustaining and catalyzing new Cohort participation in our ongoing efforts to develop evaluation capacity that facilitates the understanding and propagation of evidence-based BPC initiatives.
