Abstract

Driven by the demand for evidence of development effectiveness, the field of mobile learning for development (ML4D) has recently begun to adopt rigorous evaluation methods. Using the findings of an ongoing systematic review of ML4D interventions, this paper critically assesses the value proposition of rigorous impact evaluations in ML4D. While a drive towards more reliable evidence of mobile learning’s effectiveness as a development intervention is welcome, the maturity of the field, which continues to be characterised by pilot programmes rather than well-established and self-sustaining interventions, calls into question the utility of rigorous evaluation designs. Experiences of conducting rigorous evaluations of ML4D interventions have been mixed, and the paper concludes that in many cases the absence of an explicit programme theory negates the effectiveness of carefully designed impact evaluations. Mixed-methods evaluations are presented as a more relevant evaluation approach in the context of ML4D.

Keywords: mobile learning, development effectiveness, ML4D, developing country education, impact evaluation
