Abstract

Why do evaluations of eHealth programs fail? An alternative set of guiding principles.

Highlights

  • Much has been written about why electronic health initiatives fail [1,2,3,4]

  • We argue that the assumptions, methods, and study designs of experimental science, whilst useful in many contexts, may be ill-suited to the particular challenges of evaluating electronic health (eHealth) programs, especially in politicised situations where goals and success criteria are contested

  • We offer an alternative set of guiding principles for eHealth evaluation based on traditions that view evaluation as social practice rather than as scientific testing, and illustrate these with the example of England’s controversial Summary Care Record program


Summary

Introduction

Much has been written about why electronic health (eHealth) initiatives fail [1,2,3,4]. MacDonald and Kushner identify three forms of evaluation of government-sponsored programs: bureaucratic, autocratic, and democratic, which represent different levels of independence from the state [27]. Using this taxonomy, the approach endorsed by the previous PLoS Medicine series [5,6,7] represents a welcome shift from a bureaucratic model (in which management consultants were commissioned to produce evaluations that directly served political ends) to an autocratic model (in which academic experts use systematic methods to produce objective reports that are published independently).

