Abstract

Designs that combine differing forms of data are increasingly used to structure educational evaluation studies. In particular, combining methods can improve understanding and enable better interpretation of findings from evaluations with a range of purposes, including the impact, pilot and scale‐up evaluations considered in this paper. Logic models, visual representations that lay out the steps from the inputs to the outcomes of a programme, have become a widespread tool for designing educational evaluations, especially as they have been promoted by policy makers and funders such as the Education Endowment Foundation (EEF) in England. Yet the use of logic models in educational evaluations has not been given due attention as a way of providing a robust representation of the intervention being evaluated and of interpreting evaluation findings. The paper reflects on the practical and theoretical implications of the critical literature on logic models, focusing particularly on issues of implementation logic, causal mechanisms, context and complexity. It uses two EEF evaluations to illustrate how these issues can be addressed and presents a new framework for evidence‐based logic models that draws out a set of key issues to address in future evaluations that use logic models.
