Introduction: Establishing inter-rater agreement and reliability ascertains that multiple raters evaluate observed interventions consistently, ensuring that clinical research interventions are delivered as intended by the trial protocol.

Purpose: Using the Guidelines for Reporting Reliability and Agreement Studies, we (a) exemplified the steps to establish inter-rater reliability and inter-rater agreement on the occupation-based coaching Video Evaluation Tool and (b) evaluated best practices that promoted high inter-rater reliability and inter-rater agreement between blinded raters prior to starting a pilot randomized controlled trial. The trial examined the preliminary effectiveness of occupation-based coaching delivered via telehealth to rural families of children living with type 1 diabetes, targeting family quality of life, participation, self-efficacy, and child health outcomes.

Method: We created a library of 13 occupation-based coaching videos portraying a range of evaluations, scores, and ratings. Inter-rater agreement and reliability on the occupation-based coaching Video Evaluation Tool were established through iterations of (a) blinded rater training, (b) data collection using the tool, and (c) statistical analysis using Cohen's kappa and Cronbach's alpha.

Findings: Occurrence and Non-Occurrence Checklist (κ = 0.881, p < 0.001); Caregiver Talk and Interventionist Talk Analysis (ICC = 0.991–0.999, p < 0.001); Evidence of Independent Capacity Rating (ICC = 0.867, p = 0.006).

Conclusion: Strong inter-rater reliability and inter-rater agreement were established by engaging two blinded raters in multifaceted training, integrating real-life clients and contexts into the instrumentation and training, and precisely defining the rubric criteria. By employing such practices, high inter-rater reliability and agreement can be achieved in clinical research involving interventions and instruments that are highly subjective and individualized.
To gain greater scientific confidence in the intervention effect, a multidomain fidelity framework should be developed, and high inter-rater agreement and reliability in the instruments established, before a clinical trial is implemented.
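The two statistics named in the Method section can be sketched in a few lines. The snippet below is a minimal illustration, not the study's analysis code: the function names and the toy rating data are invented for the example, and the ICC values reported in the Findings would in practice come from a dedicated statistics package rather than from these formulas.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical ratings of the same items (e.g. a checklist)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists,
    each the same length (one score per respondent)."""
    k = len(items)
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    total_scores = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(total_scores))

# Hypothetical occurrence/non-occurrence checklist ratings from two raters.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 1, 0, 0]
print(round(cohens_kappa(rater_a, rater_b), 3))  # 0.75
```

A kappa near 1 indicates agreement well beyond chance, which is the benchmark the blinded-rater training iterations described above are meant to reach.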