Traditional puncture skills training for refresher doctors faces limitations in effectiveness and efficiency. This study explored the application of generative AI (ChatGPT), templates, and digital imaging to enhance puncture skills training. Ninety refresher doctors were enrolled sequentially into three groups: traditional training; template plus digital imaging training; and ChatGPT plus template plus digital imaging training. Outcomes included theoretical knowledge, technical skills, and trainee satisfaction, measured at baseline, post-training, and 3-month follow-up. The ChatGPT group scored 17-21% higher on theoretical knowledge than the traditional group at post-training (81.6 ± 4.56 vs. 69.6 ± 4.58, p < 0.001) and at follow-up (86.5 ± 4.08 vs. 71.3 ± 4.83, p < 0.001). It also outperformed the template group by 4-5% at post-training (81.6 ± 4.56 vs. 78.5 ± 4.65, p = 0.032) and follow-up (86.5 ± 4.08 vs. 82.7 ± 4.68, p = 0.004). For technical skills, the ChatGPT (4.0 ± 0.32) and template (4.0 ± 0.18) groups scored similarly at post-training, outperforming the traditional group (3.6 ± 0.50) by 11% (p < 0.001). At follow-up, the ChatGPT (4.0 ± 0.18) and template (4.0 ± 0.32) groups still exceeded the traditional group (3.8 ± 0.43) by 5% (ChatGPT p = 0.071; template p = 0.026), although only the template group's difference reached statistical significance. Learning curve analysis showed the fastest knowledge and skill acquisition in the ChatGPT group (slopes 13.02 and 0.62, respectively), compared with the template (11.28, 0.38) and traditional (5.17, 0.53) groups. ChatGPT responses showed 100% relevance, 50% completeness, and 60% accuracy, with a mean response time of 15.9 s. The ChatGPT group also reported the highest training satisfaction (4.2 ± 0.73), ahead of the template (3.8 ± 0.68) and traditional (2.6 ± 0.94) groups (p < 0.01). Integrating generative AI, templates, and digital imaging significantly improved puncture knowledge and skills over traditional training. Combining such technological innovations shows promise for streamlining the mastery of complex medical competencies.
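As an illustrative check on the reported effect sizes (assuming, since the abstract does not state it explicitly, that the percentages denote relative differences between group means), the knowledge advantage over traditional training works out as

\[
\frac{81.6 - 69.6}{69.6} \approx 17.2\% \ \text{(post-training)}, \qquad
\frac{86.5 - 71.3}{71.3} \approx 21.3\% \ \text{(follow-up)},
\]

consistent with the stated 17-21% range; the same calculation for technical skills gives \((4.0 - 3.6)/3.6 \approx 11\%\) at post-training and \((4.0 - 3.8)/3.8 \approx 5\%\) at follow-up.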