Abstract

In image-guided surgical tasks, the precision and timing of hand movements depend on the effectiveness of visual cues relative to specific target areas in the surgeon's peri-personal space. Two-dimensional (2D) image views of real-world movements are known to negatively affect both constrained (with tool) and unconstrained (no tool) hand movements compared with direct action viewing. The task conditions under which virtual 3D viewing would generate an advantage for surgical eye-hand coordination remain unclear. Here, we compared the effects of 2D and 3D image views on the precision and timing of surgical hand-movement trajectories in a simulator environment. Eight novices had to pick and place a small cube on target areas across different trajectory segments in the surgeon's peri-personal space, with the dominant hand, with and without a tool, under three viewing conditions: (1) direct, (2) 2D fisheye camera, and (3) virtual 3D (head-mounted). Significant effects of the location of trajectories in the surgeon's peri-personal space on movement times and precision were found. Subjects were faster and more precise across specific target locations, depending on the viewing modality.

Highlights

  • Image-guided hand-tool movements, as in laparoscopic surgery, constrain the surgeon to process critical information about what his/her hands are doing in a real-world environment while looking at a two-dimensional (2D) or three-dimensional (3D) representation of that environment displayed on a monitor

  • We show results from psychophysical experiments on novice surgeons in our surgical simulator environment EXCALIBUR, designed for studying the speed and the precision of hand movements and hand-tool operations under conditions of 2D and 3D image guidance

  • The variations in shape of the sampled real-world trajectories for constrained and unconstrained movements in the surgeon’s peri-personal space suggest complex effects of the type of visual feedback given, the type of object movement to be realized, and the target position or eccentricity on the real-world action field (RAF), which is consistent with results from previous work on constrained and unconstrained hand movements, reviewed above

Introduction

Image-guided hand-tool movements, as in laparoscopic surgery, constrain the surgeon to process critical information about what his/her hands are doing in a real-world environment while looking at a two-dimensional (2D) or three-dimensional (3D) representation of that environment displayed on a monitor. Veridical information about real-world depth is missing from the image representations, and the surgeon looks sideways or straight ahead at a monitor instead of looking down directly at his/her hands in the scene of intervention. This lack of direct visual feedback incurs measurable costs in terms of reduced comfort during task execution, longer intervention times, or lesser precision, as previous research has clearly shown (Batmaz, de Mathelin, & Dresp-Langley, 2016a, 2016b, 2017; Gallagher, Ritter, Lederman, McClusky, & Smith, 2005; Huber, Taffinder, Russell, & Darzi, 2003; Wilson et al., 2011). All subjects were novices with above-average spatial ability, as is required for surgical practice.

