Abstract

Many successful pilot programs fail when scaled up to a national level. In Kenya, where successful pilot programs have a long history of ineffective national implementation, the Tusome national literacy program—funded by the United States Agency for International Development—is a national-level scale-up of previous literacy and numeracy programs. We applied a scaling framework (Crouch and DeStefano in Doing reform differently: combining rigor and practicality in implementation and evaluation of system reforms. International development group working paper no. 2017-01, RTI International, Research Triangle Park, NC, 2017. https://www.rti.org/publication/doing-reform-differently-combining-rigor-and-practicality-implementation-and-evaluation) to examine whether Tusome’s implementation was rolled out in ways that would enable government structures and officers to respond effectively to the new program. We found that Tusome was able to clarify expectations for implementation and outcomes nationally using benchmarks for Kiswahili and English learning outcomes, and that these expectations were communicated all the way down to the school level. We noted that the essential program inputs were provided fairly consistently across the nation. In addition, our analyses showed that Kenya developed functional, if simple, accountability and feedback mechanisms to track performance against benchmark expectations. We also established that the Tusome feedback data were utilized to encourage greater levels of instructional support within Kenya’s county-level structures for education quality support. The results indicated that several of the key elements for successful scale-up were therefore put in place. However, we also discovered that Tusome failed to fully exploit the available classroom observational data to better target instructional support.
In the context of this scaling framework, the Tusome literacy program’s external evaluation results showed program impacts of 0.6–1.0 standard deviations on English and Kiswahili learning outcomes. The program implemented a functional classroom observational feedback system through existing government systems, although usage of those systems varied widely across Kenya. Classroom visits, even if still falling short of the desired rate, were far more frequent, were focused on instructional quality, and included basic feedback and advice to teachers. These findings are promising with respect to the ability of countries facing quality problems to implement a coherent instructional reform through government systems at scale.

Highlights

  • Several countries have recently begun large-scale educational interventions to respond to low learning outcomes

  • Primary Math and Reading (PRIMR)’s findings indicated that coaches did improve the literacy program and that 15:1 was a more cost-effective school-to-coach ratio than 10:1 (Piper and Zuilkowski 2015); that learning impacts were possible after only 1 year (Piper et al. 2014); that the impact of PRIMR was sufficient to reduce the poverty gap (Piper et al. 2015a); that performance on reading assessments administered in mother tongues could be improved, even without mother tongue instruction (Piper et al. 2016f); that the most cost-effective information and communication technology intervention was tablets for coaches; and that a package of teachers’ guides and learner books was more cost-effective than programs that offered only training without these materials (Piper et al. 2018)

  • We found clear evidence that Tusome communicated the Ministry of Education’s benchmarks for literacy outcomes within the design documents and training materials, but more importantly, we found that those benchmarks were widely disseminated, understood, and reinforced


Introduction

Several countries have recently begun large-scale educational interventions to respond to low learning outcomes. Although an increased dependence on causal evidence to justify large-scale implementation strengthens the research base for large programs, the decisions to take these programs to scale have largely been undertaken without a body of literature that examines the barriers to successful large-scale educational implementation or whether the external validity assumptions for these comparisons hold (Bates and Glennerster 2017). This failure to develop robust scale-up literature and practice has allowed the field of educational development to focus heavily on proof of concept, with randomized controlled trial (RCT) studies estimating the program impact of small- or medium-scale interventions in several contexts. To the credit of the Kenyan Ministry of Education, Tusome was designed according to the research evidence collected from PRIMR.

