Abstract

Oral and maxillofacial surgery currently relies on virtual surgery planning based on image data (computed tomography (CT) and magnetic resonance imaging (MRI)). Three-dimensional (3D) visualizations are typically used to plan and predict the outcome of complex surgical procedures. To translate the virtual surgical plan to the operating room, it is either converted into physical 3D-printed guides or transferred directly using a real-time navigation system. This study aims to improve the translation of the virtual surgical plan to the surgical procedure, such as oncologic or trauma surgery, in terms of accuracy and speed. Here we report an augmented reality visualization technique for image-guided surgery that allows surgeons to visualize and interact with the virtual surgical plan and navigation data in the operating room. User friendliness and usability were assessed in a formal user study that compared our augmented reality assisted technique with the gold-standard setup of a perioperative navigation system (Brainlab). In addition, the accuracy of typical navigation tasks, such as reaching landmarks and following trajectories, was compared. Overall completion time of the navigation tasks was 1.71 times faster using augmented reality (P = .034). Accuracy improved significantly using augmented reality (P < .001); for reaching physical landmarks, the improvement was less pronounced (P = .087). Although the participants were relatively unfamiliar with VR/AR (rated 2.25/5) and gesture-based interaction (rated 2/5), they reported that navigation tasks became easier to perform using augmented reality (difficulty rated 3.25/5 for Brainlab vs 2.4/5 for the HoloLens). The proposed workflow can be used in a wide range of image-guided surgical procedures as an addition to existing verified image guidance systems. The results of this user study imply that our technique enables typical navigation tasks to be performed faster and more accurately than the current gold standard. In addition, qualitative feedback on our augmented reality assisted technique was more positive than on the standard setup.
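
The paper's implementation is not reproduced here. As an illustration of the accuracy metric behind the "reaching landmarks" task, the minimal Python sketch below maps planned landmarks from image (CT) coordinates into tracker coordinates through a hypothetical rigid registration and reports the per-landmark Euclidean error. The transform, landmark coordinates, and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def to_homogeneous(points):
    """Append a 1 to each 3D point so a 4x4 transform can be applied."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def apply_transform(T, points):
    """Map Nx3 points through a 4x4 rigid transform T (row-vector convention)."""
    return (to_homogeneous(points) @ T.T)[:, :3]

# Hypothetical 4x4 image-to-tracker registration (e.g. obtained from
# paired-point registration of fiducials); values are illustrative only.
T_image_to_tracker = np.array([
    [0.0, -1.0, 0.0,  12.5],
    [1.0,  0.0, 0.0,  -3.2],
    [0.0,  0.0, 1.0, 104.0],
    [0.0,  0.0, 0.0,   1.0],
])

# Planned landmarks from the virtual surgical plan, in image coordinates (mm).
planned_image = np.array([
    [10.0, 22.0, 35.0],
    [14.5, 20.1, 33.8],
])

# Positions actually reached with the tracked pointer, in tracker coordinates (mm).
reached_tracker = np.array([
    [-9.4,  6.9, 139.2],
    [-7.5, 11.4, 137.6],
])

planned_tracker = apply_transform(T_image_to_tracker, planned_image)

# Per-landmark Euclidean error: the accuracy measure for landmark-reaching tasks.
errors_mm = np.linalg.norm(planned_tracker - reached_tracker, axis=1)
print("Landmark errors (mm):", np.round(errors_mm, 2))
```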

Highlights

  • Oral and maxillofacial surgery currently relies on virtual surgery planning based on image data (CT and MRI)

  • The design of an augmented reality visualization system for image-guided surgery is described

  • Twelve male participants from our local university hospital completed all predefined navigation tasks, in randomized order, on both systems: Brainlab with and without the HoloLens (see the illustrative analysis sketch below)
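
The paper, as excerpted here, does not state which statistical test produced the reported P values. As an illustrative assumption, the sketch below applies a Wilcoxon signed-rank test (a common paired, non-parametric choice for n = 12) to synthetic completion times, showing how a speed-up factor like the reported 1.71x and a paired P value would be computed. All data are fabricated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic completion times (seconds) for 12 participants performing the
# same navigation tasks on both systems; values are illustrative only.
t_brainlab = rng.normal(loc=120.0, scale=20.0, size=12)
t_hololens = t_brainlab / rng.normal(loc=1.71, scale=0.25, size=12)

# Mean speed-up factor, analogous to the reported 1.71x faster completion.
speedup = t_brainlab.mean() / t_hololens.mean()
print(f"Mean speed-up: {speedup:.2f}x")

# Paired, non-parametric comparison of the two systems.
stat, p = stats.wilcoxon(t_brainlab, t_hololens)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, P = {p:.3f}")
```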
