Abstract
Evaluation capacity building (ECB) is still an emerging area of study in the field of evaluation. The purpose of ECB is to assist program practitioners with implementing higher‐quality evaluation; however, we need better tools and resources to effectively assess ECB efforts. Existing measures typically depend on self‐report as opposed to assessing the artifacts of ECB training. Among the few non‐self‐report tools that support the assessment of ECB efforts are Relational Systems Evaluation rubrics designed to evaluate logic models, pathway models, and evaluation plans. These rubrics were first developed and tested several years ago. The purpose of the current study is to update the Relational Systems Evaluation rubrics to reflect current ECB knowledge. The updated rubrics have good to excellent inter‐rater reliability and high internal consistency. The results of this study contribute to the ECB field by providing measurement tools for assessing the quality of ECB artifacts. The rubrics can also be used by organizations and funders who need a systematic approach for assessing (and comparing) the quality of evaluation plans and visual theory of change models (e.g., logic models).