Abstract
Evaluators of family planning programs have begun to use simulated client ratings to assess the quality of services. However, little is known about the reliability of such ratings when they are used to assess individual provider performance. This study examined the reliability of quality-of-care ratings in a Peruvian community-based distribution program by using pairs of concealed observers (a simulated client and a companion). Average interrater agreement, measured by intraclass correlation, was .50, indicating that ratings are not reliable enough for the evaluation of a single provider by a single rater. The study results suggest that checklist-item scores referring to specific provider behaviors will be more reliable and useful than ratings.
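For readers unfamiliar with the statistic cited above, the following sketch illustrates how a one-way random-effects intraclass correlation, ICC(1,1), can be computed for targets each rated by a pair of observers. The ratings below are invented example data for illustration only; they are not from the study, and the study's own ICC specification may differ.

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for n targets, each rated by k raters.

    pairs: list of tuples, one tuple of ratings per target.
    """
    n = len(pairs)
    k = len(pairs[0])
    grand = sum(sum(p) for p in pairs) / (n * k)
    # Between-target mean square (variance of target means, scaled by k)
    msb = k * sum((sum(p) / k - grand) ** 2 for p in pairs) / (n - 1)
    # Within-target mean square (disagreement between raters on the same target)
    msw = sum((x - sum(p) / k) ** 2 for p in pairs for x in p) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical data: each tuple is (simulated-client rating, companion rating)
ratings = [(4, 3), (2, 2), (5, 4), (3, 1), (4, 4), (1, 2)]
print(round(icc_oneway(ratings), 2))  # prints 0.68
```

An ICC near .50, as the study reports, means roughly half of the rating variance reflects true differences among providers and half reflects rater disagreement, which is why averaging over multiple raters or using behavior-specific checklist items improves reliability.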