Introduction: The Gap–Kalamazoo Communication Skills Assessment Form (GKCSAF) is widely used in medical education, yet its reliability in real occupational therapy clinical settings remains unexplored. This study aimed to assess the intra-rater and inter-rater reliability, as well as the random measurement error, of the GKCSAF in occupational therapy.

Method: Five independent raters evaluated audio recordings and transcripts of conversations involving 30 patients treated by 22 assessors (7 therapists and 15 students). Both direct and coded ratings were used.

Results: For direct ratings, intra-rater reliability of the total score was moderate (intraclass correlation coefficient (ICC) = 0.76), whereas inter-rater reliability was poor (ICC = 0.31). The minimal detectable change (MDC%) was acceptable for the same rater (17.8%) but not across different raters (38.3%). Weighted kappa values for the domain scores indicated poor to fair reliability (−0.01 to 0.34). Coded ratings showed moderate intra-rater reliability (ICC = 0.69) and poor inter-rater reliability (ICC = 0.22); MDC% was acceptable for the same rater (24.8%) but not across different raters (65.5%), and weighted kappa values for the domain scores indicated poor to fair reliability (−0.02 to 0.33).

Conclusion: The GKCSAF displays acceptable intra-rater but poor inter-rater reliability in occupational therapy clinical scenarios. Using multiple raters is advised to improve reliability, whereas coding may add little. The GKCSAF should therefore be used cautiously in occupational therapy education, with adequate rater training and, where possible, multiple raters to ensure assessment consistency.
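The reliability indices reported above can be computed from a subjects-by-raters score matrix. As an illustration only (the data below are made up, not the study's; the abstract does not state which ICC model was used, so a two-way random-effects, absolute-agreement ICC(2,1) is assumed), a minimal sketch in Python:

```python
import numpy as np


def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement ICC(2,1).

    ratings: array of shape (n_subjects, k_raters).
    Computed from the standard one-score ANOVA decomposition.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_subj = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_rater = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_rater
    ms_subj = ss_subj / (n - 1)
    ms_rater = ss_rater / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (
        ms_subj + (k - 1) * ms_err + k * (ms_rater - ms_err) / n
    )


def mdc_percent(ratings, icc):
    """MDC% at the 95% level, expressed relative to the mean score.

    SEM = SD * sqrt(1 - ICC);  MDC95 = 1.96 * sqrt(2) * SEM.
    """
    sem = ratings.std(ddof=1) * np.sqrt(max(1.0 - icc, 0.0))
    mdc95 = 1.96 * np.sqrt(2) * sem
    return 100.0 * mdc95 / ratings.mean()


if __name__ == "__main__":
    # Hypothetical scores: 6 subjects rated twice (e.g. same rater, two sessions).
    scores = np.array([[4.0, 4.5], [3.0, 3.0], [5.0, 4.5],
                       [2.0, 2.5], [4.5, 4.0], [3.5, 3.5]])
    icc = icc_2_1(scores)
    print(f"ICC(2,1) = {icc:.2f}, MDC% = {mdc_percent(scores, icc):.1f}%")
```

A lower ICC inflates the SEM and hence the MDC%, which is why the poor inter-rater ICCs above coincide with MDC% values (38.3% and 65.5%) too large to detect realistic change across raters.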