Abstract

Introduction: Multiple competency assessment tools exist for central venous catheter (CVC) insertion, but there is no standardized approach to their use. A direct-observation competency checklist tool (CT) for pediatric CVC placement was previously developed, with validity evidence obtained in simulated encounters but not in clinical environments. This study aimed to evaluate how the CT performs in a clinical environment and to compare it with two commonly used competency assessments: a global rating scale (GRS) and an entrustment scale (ES).

Methods: This was a 12-month (7/21-6/22), prospective, multi-site, observational study. With each CVC inserted, supervisors completed three competency assessments: (1) CT: 15 steps with dichotomous (Complete/Not Complete) options; (2) GRS: a dichotomous (Competent/Not Competent) assessment; (3) ES: a 5-level supervision rating from ‘observe only’ to ‘able to perform procedure unsupervised’. CT inter-item reliability was evaluated with Cronbach’s alpha. CT, GRS, and ES assessments were compared using partial correlations adjusting for clustering of observations within fellows and of fellows within programs. Mixed-effects linear, ordinal, and logistic regressions were used to model the effect of fellow year on the scales.

Results: 17 sites participated, with 339 CVC insertions completed by 116 unique fellows. Overall, the CT had acceptable internal reliability (alpha=0.66 [95% CI 0.60-0.71]). The CT had minimal correlation with the other assessments (GRS R=0.38; ES R=0.29), while the GRS and ES were moderately correlated with each other (R=0.54). On the CT, 2nd-year fellows scored higher than 1st-year fellows (0.29 [0.08-0.51] points, p=0.008), but 3rd-year fellows did not score higher than 2nd-year fellows (p=0.243). On the GRS and ES, trainees had higher competency scores with each year of training (year 1 to 2, p<0.001; year 2 to 3, p=0.009).
Conclusions: The checklist tool performed acceptably during real-world competency assessment and correlated with fellow year of training from year 1 to year 2 but not from year 2 to year 3, reflecting its strength in assessing competency among novice trainees. The current assessment tools (ES and GRS) did not correlate well with the CT; different competency assessment tools may serve different purposes depending on level of training. Further evaluation of these assessment tools is warranted to determine the optimal approach to competency assessment.
