Abstract

Current surgical augmented reality (AR) systems typically employ an on-demand display behavior, where the surgeon can toggle the AR on or off using a switch. The need to be able to turn the AR off stems in part from the obstructing nature of AR overlays, which can hide important information from the surgeon in order to provide see-through vision. This on-demand paradigm is inefficient, as the surgeon is always in one of two sub-optimal states: either they do not benefit at all from the image guidance (AR off), or the field of view is partially obstructed (AR on). Additionally, frequent toggling between the two views during the operation can be disruptive for the surgeon. This letter presents a novel approach to automatically adapt the AR display view based on the context of the surgical scene. Using gaze tracking in conjunction with information from the surgical instruments and the registered anatomy, a multi-Gaussian-process model can be trained to infer the desired AR display view at any point during the procedure. Furthermore, a new AR display view is introduced in this model, taking advantage of the context information to display only a partial AR view when relevant. To validate the presented approach, a detailed simulation of a neurosurgical tumor contour marking task is designed. A study conducted with 15 participants demonstrates the usefulness of the proposed approach, showing a statistically significant mean reduction of 48% in the average time needed to detect simulated bleeding, as well as statistically significant improvements in total task time.
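To illustrate the kind of inference the abstract describes, the sketch below frames the problem as a multi-class Gaussian process classifier mapping scene-context features to a display mode. The feature set (gaze-to-instrument distance, instrument-to-anatomy distance, gaze dwell time), the labels, the training data, and the use of scikit-learn's GaussianProcessClassifier are all assumptions made for this minimal example; the letter's actual model, features, and training procedure may differ.

```python
# Minimal sketch (not the authors' implementation): a multi-class Gaussian
# process classifier that maps hypothetical surgical-scene context features
# to a desired AR display mode.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Hypothetical context features per time step:
#   [gaze-to-instrument-tip distance (mm),
#    instrument-tip-to-registered-anatomy distance (mm),
#    gaze dwell time on overlay region (s)]
X_train = np.array([
    [2.0,  1.5, 0.8],   # close to anatomy, steady gaze  -> full overlay
    [15.0, 3.0, 0.2],   # gaze away from the tool        -> partial overlay
    [40.0, 25.0, 0.0],  # instrument far from anatomy    -> overlay off
    [3.5,  2.0, 1.1],
    [18.0, 4.5, 0.3],
    [55.0, 30.0, 0.1],
])
# Display modes: 0 = AR off, 1 = partial AR view, 2 = full AR overlay
y_train = np.array([2, 1, 0, 2, 1, 0])

# RBF kernel; multi-class inference is handled one-vs-rest internally.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=10.0),
                                random_state=0)
gpc.fit(X_train, y_train)

# Infer the display mode for a new scene context.
x_now = np.array([[5.0, 2.5, 0.6]])
mode = gpc.predict(x_now)[0]
probs = gpc.predict_proba(x_now)[0]
print(f"predicted display mode: {mode}, class probabilities: {probs}")
```

In practice such a classifier would be trained on recorded procedures and evaluated continuously, switching the overlay between off, partial, and full views as the scene context evolves.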
