Abstract

Randomised controlled trials (RCTs) are increasingly used to evaluate educational interventions in the UK. However, RCTs remain controversial among some parts of the research community. This paper argues that the widespread use of the term ‘gold standard’ to describe RCTs is problematic, as it implies that other research methods are inferior. The usefulness of RCTs can be greatly enhanced when they are used in conjunction with implementation-specific measures (e.g. observation tools, attitude/engagement surveys and interviews). The argument is advanced through case studies of two evaluations: one relates to the development of science subject leader skills and expertise at primary school level, and the other to co-operative learning in primary mathematics. Both evaluations randomised schools to the intervention or to a business-as-usual control, and compared impact using subject knowledge tests. Integral to each study was a process evaluation that examined evidence from classroom practice alongside feedback from the teachers and pupils themselves. We contend that this enabled much more holistic and richly interpretative research. The paper concludes that privileging particular paradigms should be set aside when designing effective evaluations of educational interventions, and that it is insufficient to ask ‘what works?’ without also asking ‘why?’, ‘where?’ and ‘how?’.
