Abstract

Simulator training for image-guided surgical interventions allows task performance to be tracked in terms of speed and precision of task execution. Simulator tasks are more or less realistic with respect to real surgical tasks, and the lack of clear criteria for learning curves and individual skill assessment is more often than not a problem. Recent research has shown that trainees frequently focus on getting faster at the simulator task, and this strategy bias often compromises the evolution of their precision score. As a consequence, and whatever the degree of surgical realism of the simulator task, the first and most critical criterion for skill evolution should be task precision, not task execution time. This short opinion paper argues that the individual training statistics of novices on a simulator task should therefore always be compared with the statistics of an expert surgeon on the same task. This implies that benchmark statistics from the expert are made available and that an objective criterion, i.e. a parameter measure of task precision, is used for assessing the learning curves of novices.
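To illustrate the kind of benchmark comparison proposed here, the sketch below (in Python) compares a novice's per-session precision statistics against an expert benchmark. The trial data, the radial-error precision measure, and the tolerance threshold are all hypothetical choices made for illustration; they are not taken from the paper.

# Minimal sketch, assuming precision is quantified as mean radial error
# (in mm) of the instrument tip from the target; the data layout and the
# tolerance factor are illustrative assumptions, not the paper's method.
from statistics import mean
from math import hypot

def precision_score(trials):
    """Mean radial error over (dx, dy) offsets from the target; lower is more precise."""
    return mean(hypot(dx, dy) for dx, dy in trials)

def compare_to_expert(novice_sessions, expert_trials, tolerance=1.25):
    """For each training session, report the novice's precision score and whether
    it falls within `tolerance` times the expert benchmark (an arbitrary criterion)."""
    benchmark = precision_score(expert_trials)
    report = []
    for i, trials in enumerate(novice_sessions, start=1):
        score = precision_score(trials)
        report.append({
            "session": i,
            "novice_precision": score,
            "expert_benchmark": benchmark,
            "within_benchmark": score <= tolerance * benchmark,
        })
    return report

# Hypothetical data: per-trial (dx, dy) offsets of the instrument tip from the target.
expert = [(0.4, 0.2), (0.1, 0.3), (0.2, 0.1)]
novice = [
    [(2.0, 1.5), (1.8, 1.2)],  # session 1
    [(1.0, 0.8), (0.9, 0.6)],  # session 2
    [(0.5, 0.3), (0.4, 0.4)],  # session 3
]
for row in compare_to_expert(novice, expert):
    print(row)

Tracking such a precision-based criterion across sessions, rather than execution time alone, is one way to make the learning-curve comparison with the expert benchmark explicit.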
