Abstract
Increasing coverage of evidence-based maternal, neonatal, child, reproductive health and nutrition (MNCRHN) programs in low- and middle-income countries has coincided with dramatic improvements in health despite variable quality of implementation. Comprehensive evaluation to inform program improvement requires standardized but adaptable tools, which the Real Accountability, Data Analysis for Results (RADAR) project has developed. To inform selection of tools and methods packages (‘packages’) to measure program quality of care (QoC), we documented our experiences testing the packages, which were developed and adapted based on global and local expertise and on pre- and pilot-testing. We conducted cross-sectional studies in 2018–2019 on the quality of 1) integrated community case management, 2) counseling on maternal, infant, and young child feeding, 3) intrapartum care, and 4) family planning counseling in Mali, Mozambique, Tanzania, and Malawi. Herein we describe package performance and highlight experiences that inform package selection and use. Direct observation packages provided high-quality, immediately applicable results, but they required specialized expertise, in-person collection, adequate patient volume, reasonable wait times, and unambiguously ‘correct’ provision of care. General satisfaction questions from exit interview packages produced uniformly positive responses despite variable observed quality of care. Variation increased when questions were more targeted, and findings on caregivers’ and clients’ recall of recommendations were more actionable. Interactive clinical vignettes can capture providers’ knowledge of clinical care, while for services that can be simulated, such as family planning counseling, simulated clients can capture actual provider practice. Clinicians could more easily demonstrate tactile aspects of intrapartum care using observed structured clinical examinations, but this method required storing and transporting mannequins. Based on our findings, we recommend ten questions upon which evaluators can base package selection. Findings from these packages inform programs and, in the context of comprehensive program evaluation, enable us to link programs with impact.