Objective: This study develops linear and generalized additive models (GAMs) of video-recorded two-dimensional hand motion (synonymously referred to as hand movements or hand kinematics) to predict expert-rated performance on a series of surgical motion scales.

Background: Surgical performance assessments are costly and time-consuming. Automatically quantifying hand motion may offload some of the burden of surgical coaching and intervention by collecting features of psychomotor performance without manual review.

Methods: Five experts rated anonymized video clips of benchtop suturing and tying tasks (n = 219) on four visual-analog (0-10) performance scales: fluidity of motion, motion economy, tissue handling, and hand coordination. Custom software tracked both of each participant's hands across successive video frames and populated a robust feature set used to train a series of predictive models to reproduce the expert ratings.

Results: A GAM (which accounts for nonlinear effects) predicted fluidity-of-motion ratings with slope = 0.71, intercept = 1.98, and R² = 0.77 across clinicians of different experience levels. The fluidity-of-motion and motion-economy models outperformed those predicting hand-coordination and tissue-handling ratings.

Conclusions: Hand motion tracking may not capture all contextual features of surgical tasks. Future work will explore how well simulation-based models extrapolate to the more dynamic setting of the operating room.
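As a rough illustration of the modeling step described above, the sketch below fits a GAM with one smooth term per kinematic feature and then regresses the expert ratings on the model's predictions to obtain slope/intercept/R² calibration statistics of the kind reported in the Results. The pygam library, the feature choices, and the synthetic data are all assumptions for illustration; this is not the authors' implementation.

    import numpy as np
    from pygam import LinearGAM, s
    from scipy.stats import linregress

    rng = np.random.default_rng(0)

    # Hypothetical kinematic features per video clip (e.g., path length,
    # mean speed, jerk); the real features come from the hand-tracking software.
    n_clips = 219
    X = rng.normal(size=(n_clips, 3))
    y = np.clip(5 + X @ np.array([1.2, -0.8, 0.5])
                + rng.normal(0.0, 0.7, n_clips), 0, 10)

    # One spline (smooth) term per feature lets the model capture the
    # nonlinear effects a GAM accounts for; gridsearch tunes the smoothing
    # penalty by generalized cross-validation.
    gam = LinearGAM(s(0) + s(1) + s(2)).gridsearch(X, y)
    y_hat = gam.predict(X)

    # Regress the expert ratings on the predictions to obtain calibration
    # statistics analogous to the slope/intercept/R² in the Results.
    fit = linregress(y_hat, y)
    print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, "
          f"R2={fit.rvalue**2:.2f}")

In practice the calibration would be computed on held-out clips rather than the training data, but the workflow (track hands, extract features, fit a GAM, compare predictions against expert ratings) follows the pipeline the abstract describes.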