Abstract

Newly developed robot-assisted surgery (RAS) platforms will be required to provide proficiency-based simulation training. Scoring methodologies and performance feedback for trainees are currently inconsistent across robotic simulator platforms, and there are virtually no prior publications describing how virtual reality (VR) simulation passing benchmarks have been established. This paper compares three methods for determining proficiency-based scoring thresholds (benchmarks) for the new Medtronic Hugo™ RAS robotic simulator. Nine experienced robotic surgeons from multiple disciplines performed each of the 49 skills exercises five times. The data were analyzed in three different ways: (1) including all collected data, (2) excluding each surgeon's first session, and (3) excluding outliers. Excluding the first session discounts the effect of becoming familiar with the exercise, while excluding outliers removes potentially erroneous data caused by technical issues, unexpected distractions, and similar factors. Outliers were identified using a common statistical technique based on the interquartile range (IQR) of the data. For each method, the mean and standard deviation were calculated, and the benchmark was set at one standard deviation above the mean. Compared with including all the data, excluding outliers removed fewer data points than excluding first sessions, yet it made the metric benchmarks more difficult by an average of 11%; excluding first sessions made the benchmarks easier by an average of about 2%. Relative to benchmarks calculated from all data points, excluding outliers produced the largest change, making the benchmarks more challenging. We determined that this method provided the best representation of the data. These benchmarks should be validated in future clinical training studies.
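
The following is a minimal sketch of the benchmark calculation described above. It assumes a time-based metric where lower scores are better (so the passing threshold is mean + 1 SD and a trainee passes at or below it), and it uses the conventional 1.5 × IQR (Tukey) fences for outlier detection; the multiplier, the synthetic scores, and all function names are illustrative assumptions, not taken from the study.

```python
import numpy as np

# Hypothetical scores: rows = 9 surgeons, columns = 5 sessions of one exercise.
# Values are synthetic, for illustration only.
rng = np.random.default_rng(0)
scores = rng.normal(loc=120.0, scale=15.0, size=(9, 5))

def benchmark(values: np.ndarray) -> float:
    """Proficiency benchmark: one standard deviation above the mean."""
    return float(values.mean() + values.std(ddof=1))

def drop_iqr_outliers(values: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Remove values outside [Q1 - k*IQR, Q3 + k*IQR] (assumed Tukey fences)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    mask = (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
    return values[mask]

all_data = scores.ravel()                   # method 1: include everything
no_first = scores[:, 1:].ravel()            # method 2: drop each surgeon's first session
no_outliers = drop_iqr_outliers(all_data)   # method 3: drop IQR outliers

for label, data in [("all data", all_data),
                    ("first sessions excluded", no_first),
                    ("outliers excluded", no_outliers)]:
    print(f"{label:>25}: benchmark = {benchmark(data):.1f}")
```

Note that for a lower-is-better metric, removing high outliers reduces both the mean and the standard deviation, which lowers the mean + 1 SD threshold and therefore makes the benchmark harder to meet, consistent with the direction of change reported above.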

