Abstract

Synergistic fusion of pre-operative (pre-op) and intraoperative (intra-op) imaging data provides surgeons with valuable insight that can improve their decision-making during minimally invasive robotic surgery. In this paper, we propose an efficient technique to segment multiple objects in intra-op multi-view endoscopic videos based on priors captured from pre-op data. Our approach leverages information from 3D pre-op data in the analysis of visual cues in the 2D intra-op data by formulating the problem as one of finding the 3D pose and non-rigid deformations of tissue models driven by features from 2D images. We present a closed-form solution for our formulation and demonstrate how it allows for the inclusion of a laparoscopic camera motion model. Our efficient method runs in real time on a single CPU core, making it practical even for robotic surgery systems with limited computational resources. We validate the utility of our technique on ex vivo data as well as in vivo clinical data from laparoscopic partial nephrectomy surgery and demonstrate its robustness in segmenting stereo endoscopic videos.
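The paper derives its own closed-form solution for pose and deformation; as an illustrative analogue only (not the authors' formulation), the classic Kabsch/Procrustes alignment below shows how a rigid 3D pose can be recovered in closed form from point correspondences, assuming the 2D image features have already been lifted to 3D (e.g. by stereo triangulation from the endoscope's two views). The function name and setup are ours.

```python
import numpy as np

def closed_form_rigid_alignment(model_pts, observed_pts):
    """Closed-form (Kabsch) rigid alignment of a pre-op model to
    observed intra-op 3D points.

    model_pts, observed_pts: (N, 3) arrays of corresponding points.
    Returns rotation R (3x3) and translation t (3,) such that
    observed ~= model @ R.T + t.
    """
    # Center both point sets on their centroids.
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered sets.
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    # SVD gives the least-squares rotation in closed form.
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

A single SVD of a 3x3 matrix per frame is why such closed-form steps are cheap enough for real-time use on a single CPU core; the paper's solution additionally accounts for non-rigid tissue deformation and camera motion, which this sketch omits.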
