Abstract

Evaluation of fundamental surgical skills is invaluable to the training of medical students and junior residents. This study assessed the effectiveness of crowdsourcing nonmedical personnel to evaluate technical proficiency at simulated vessel ligation. Fifteen videos were captured of participants performing vessel ligation using a low-fidelity model (5 attending surgeons and 5 medical students before and after training). These videos were evaluated by nonmedical personnel recruited through Amazon Mechanical Turk, as well as by 3 experienced surgical faculty. Evaluation criteria were based on the Objective Structured Assessment of Technical Skills (scale: 5-25). Results were compared using the Wilcoxon signed rank-sum test and Cronbach's alpha (α). Thirty-two crowd workers evaluated all 15 videos. Crowd workers scored attending surgeon videos significantly higher than pretraining medical student videos (20.5 vs 14.9, p < 0.001), demonstrating construct validity. Across all videos, crowd evaluations were more lenient than expert evaluations (19.1 vs 14.5, p < 0.001). However, average crowd evaluations correlated more strongly with average expert evaluations (α = 0.95) than any 2 individual expert evaluators correlated with each other (α = 0.72-0.88). Combined reimbursement for all workers was $80.00. After adjusting for score inflation, crowdsourced workers can evaluate surgical fundamentals with excellent validity. This resource is considerably less costly and potentially more reliable than individual expert evaluations.
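
As an illustration of the statistical comparisons described above, the sketch below (not the authors' code) shows how a rank-based group comparison and Cronbach's alpha could be computed in Python with NumPy and SciPy. The score arrays and the cronbach_alpha helper are hypothetical stand-ins, since the study's raw data are not reproduced here, and the attending-versus-student comparison is treated as unpaired (rank-sum); the abstract does not specify the exact Wilcoxon variant used.

import numpy as np
from scipy.stats import ranksums

def cronbach_alpha(scores):
    # Cronbach's alpha for a (raters x videos) score matrix:
    # alpha = k/(k-1) * (1 - sum of per-rater variances / variance of summed scores)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[0]
    rater_vars = scores.var(axis=1, ddof=1)
    total_var = scores.sum(axis=0).var(ddof=1)
    return k / (k - 1) * (1 - rater_vars.sum() / total_var)

# Hypothetical OSATS-style scores (scale 5-25); not the study's data.
attending = np.array([21, 20, 22, 19, 20])
student_pre = np.array([15, 14, 16, 15, 14])
stat, p = ranksums(attending, student_pre)  # unpaired rank-sum comparison
print(f"Wilcoxon rank-sum: statistic={stat:.2f}, p={p:.4f}")

# Hypothetical per-video mean scores from crowd workers and experts (15 videos).
crowd_means = np.array([19, 18, 21, 20, 17, 22, 19, 18, 20, 21, 17, 19, 20, 18, 21])
expert_means = np.array([14, 13, 17, 16, 12, 18, 15, 13, 16, 17, 12, 14, 16, 13, 17])
alpha = cronbach_alpha(np.vstack([crowd_means, expert_means]))
print(f"Cronbach's alpha (crowd vs expert means): {alpha:.2f}")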
