Abstract

Background: Group-based parent programmes demonstrate positive benefits for adult and child mental health and for child behaviour outcomes. Greater fidelity to the programme delivery model is associated with better outcomes for attending families; however, fidelity is typically self-monitored using programme-specific checklists. Self-completed measures are open to bias, and it is difficult to know whether the positive outcomes found in research studies will be maintained when programmes are delivered in regular services. Currently, ongoing objective monitoring of quality is not conducted during usual service delivery. This is surprising given that the quality of other services is assessed objectively, for example by the Office for Standards in Education, Children's Services and Skills (OFSTED). Independent observations of programme delivery are needed to assess fidelity and quality of delivery, to ensure positive outcomes and thereby justify the expense of programme delivery.

Methods: This paper outlines the initial development and reliability of the Parent Programme Implementation Checklist (PPIC), a simple, brief and generic observational tool for independent assessment of the implementation fidelity of group-based parent programmes. The PPIC does not require intensive observer training before use. This paper presents initial data obtained during delivery of the Incredible Years BASIC programme across nine localities in England and Wales, United Kingdom (UK).

Results: Reasonable levels of inter-rater reliability were achieved across each of the three subscales (Adherence, Quality and Participant Responsiveness) and the overall total score, with percentage agreement above 70% and intra-class correlations (ICCs) ranging from 0.404 to 0.730. Intra-rater reliability (n = 6) was acceptable at the subscale level.

Conclusions: We conclude that the PPIC shows promise and, with further development, could be used to assess the fidelity of parent group delivery during research trials and standard service delivery. Further development would need to include data from other parent programmes and testing by non-research staff. Objective assessment of the quality of delivery would show services where improvements could be made.
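The two reliability statistics reported above can be computed with standard formulas. Below is a minimal Python sketch of two-rater percentage agreement and ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form of the intra-class correlation; the abstract does not specify which ICC model the study used, and the example ratings are illustrative, not PPIC data.

```python
import numpy as np

def percent_agreement(a, b):
    """Percentage of items on which two raters give identical scores."""
    a, b = np.asarray(a), np.asarray(b)
    return 100.0 * (a == b).mean()

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: array of shape (n_subjects, k_raters).
    Computed from the two-way ANOVA mean squares (Shrout & Fleiss).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Two-way ANOVA sum-of-squares decomposition
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                # between-subjects mean square
    msc = ss_cols / (k - 1)                # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))     # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfectly agreeing raters both statistics are maximal (100% agreement, ICC = 1.0); a constant offset between raters leaves percentage agreement at 0 but only lowers the ICC, which is why absolute-agreement ICCs are usually reported alongside simple agreement.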
