Abstract

Background
Understanding the context of a health programme is important in interpreting evaluation findings and in considering their external validity for other settings. Public health researchers can be imprecise and inconsistent in their usage of the word “context” and its application to their work. This paper presents an approach to defining context, to capturing relevant contextual information and to using such information to help interpret findings, from the perspective of a research group evaluating the effect of diverse innovations on coverage of evidence-based, life-saving interventions for maternal and newborn health in Ethiopia, Nigeria and India.

Methods
We define “context” as the background environment or setting of any programme, and “contextual factors” as those elements of context that could affect implementation of a programme. Through a structured, consultative process, contextual factors were identified while trying to strike a balance between comprehensiveness and feasibility. Thematic areas included demographics and socio-economics, epidemiological profile, health systems and service uptake, infrastructure, education, environment, and politics, policy and governance. We outline an approach for capturing and using contextual factors while maximising use of existing data. Methods include desk reviews, secondary data extraction and key informant interviews. Outputs include databases of contextual factors and summaries of existing maternal and newborn health policies and their implementation. Use of contextual data will be qualitative in nature and may assist in interpreting findings in both quantitative and qualitative aspects of programme evaluation.

Discussion
Applying this approach was more resource-intensive than expected, in part because routinely available information was not consistently available across settings and more primary data collection was required than anticipated. Data were used only minimally, partly due to a lack of evaluation results that needed further explanation, but also because contextual data were not available for the precise units of analysis or time periods of interest. We would advise others to consider integrating contextual factors within other data collection activities, and to conduct regular reviews of maternal and newborn health policies. This approach and the learnings from its application could help inform the development of guidelines for the collection and use of contextual factors in public health evaluation.
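The Methods describe databases of contextual factors organised by thematic area, and the Discussion notes that data were often unavailable for the precise units of analysis or time periods of interest. As a minimal sketch of what one record in such a database might hold, assuming a hypothetical schema (the field names below are illustrative, not the authors’), in Python:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ContextualFactor:
        """One entry in a hypothetical contextual-factors database."""
        setting: str            # e.g. a state or district in Ethiopia, Nigeria or India
        unit_of_analysis: str   # e.g. "district"; granularity determines later usability
        thematic_area: str      # e.g. "health systems and service uptake"
        indicator: str          # e.g. "skilled birth attendance rate"
        value: Optional[float]  # may be missing: routine data are not always available
        period: str             # e.g. "2012-2015"; should overlap the evaluation period
        source: str             # "desk review", "secondary data" or "key informant interview"

    def usable(factor: ContextualFactor, unit: str, period: str) -> bool:
        # A factor can inform interpretation only if it matches the evaluation's
        # unit of analysis and time period -- the mismatch the Discussion reports.
        return (factor.value is not None
                and factor.unit_of_analysis == unit
                and factor.period == period)

A record structured this way makes the limitation explicit: a contextual factor only helps interpret a finding when its unit of analysis and time period line up with those of the evaluation.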

Highlights

  • Informed Decisions for Actions to improve maternal and newborn health (IDEAS) developed research questions to test the Bill and Melinda Gates Foundation’s theory of change for its maternal, newborn and child health strategy [45]

  • Collection and assessment of contextual factors are critical to establishing the internal and external validity of study findings. The factors relevant to health programme evaluation are those that may confound or modify the effect of programmes on the outcome of interest, particularly in large-scale evaluations where randomised studies are neither feasible nor appropriate [1]

  • Past experiences with capturing contextual data for the Integrated Management of Childhood Illness (IMCI) [1] and Expanded Quality Management Using Information Power (EQUIP) [32] evaluations reveal that substantial time and effort can be invested in collecting contextual data, yet not all of that effort yields meaningful information

Introduction

Informed Decisions for Actions to improve maternal and newborn health (IDEAS) developed research questions to test the Bill and Melinda Gates Foundation’s theory of change for its maternal, newborn and child health strategy [45]. Collection and assessment of contextual factors are critical to establishing the internal and external validity of study findings. The factors relevant to health programme evaluation are those that may confound or modify the effect of programmes on the outcome of interest, particularly in large-scale evaluations where randomised studies are neither feasible nor appropriate [1]. There is a clear need to make evaluation findings useful to the programme in question, and to provide information on transferability and applicability to inform decisions to scale up the programme or implement it elsewhere [2, 3].
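The distinction drawn here between confounding and effect modification can be made concrete. The study itself uses contextual data qualitatively, but as an illustrative sketch (simulated data, not the IDEAS study’s data or method), a contextual factor such as female literacy could enter an outcome model both as an adjustment covariate (confounding) and as an interaction with the programme indicator (effect modification):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated, illustrative data only -- not from the IDEAS evaluation.
    rng = np.random.default_rng(0)
    n = 500
    literacy = rng.binomial(1, 0.5, n)                 # contextual factor
    programme = rng.binomial(1, 0.4 + 0.2 * literacy)  # uptake depends on context (confounding)
    p_outcome = 0.2 + 0.15 * programme + 0.10 * literacy + 0.10 * programme * literacy
    outcome = rng.binomial(1, p_outcome)
    df = pd.DataFrame({"outcome": outcome, "programme": programme, "literacy": literacy})

    # Adjusting for literacy addresses confounding; the programme:literacy
    # interaction tests for effect modification by context.
    fit = smf.logit("outcome ~ programme + literacy + programme:literacy", data=df).fit(disp=False)
    print(fit.summary())

A significant interaction coefficient would suggest the contextual factor modifies the programme effect, which is exactly the kind of interpretation that contextual data are collected to support.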
