Abstract

Computing anatomical information and the laparoscope position is a fundamental building block of surgical navigation in Minimally Invasive Surgery (MIS). Recovering a dense 3D structure of the surgical scene from visual cues remains challenging, and online laparoscope tracking still relies primarily on external sensors, which increases system complexity. Here, we propose a learning-driven framework that achieves image-guided laparoscope localization together with 3D reconstruction of the anatomical structures. To reconstruct the whole surgical environment, we first fine-tune a learning-based stereoscopic depth perception method, which is robust to texture-less and varying soft tissues, for depth estimation. We then develop a dense reconstruction algorithm that represents the scene with surfels, estimates the laparoscope poses, and fuses the depth maps into a unified reference coordinate frame for tissue reconstruction. To estimate the poses of new laparoscope views, we devise a coarse-to-fine localization method that incorporates the reconstructed model. We evaluate the reconstruction and localization modules on three datasets: the Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) dataset, ex-vivo data collected with a Universal Robot (UR) and a Karl Storz laparoscope, and an in-vivo DaVinci robotic surgery dataset. The reconstructed structures retain rich surface-texture detail with an error below 1.71 mm, and the localization module accurately tracks the laparoscope using only images as input. Experimental results demonstrate the superior performance of the proposed method in anatomy reconstruction and laparoscope localization. The proposed framework can potentially be integrated into current surgical navigation systems.
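The abstract outlines a three-stage pipeline: stereo depth estimation, surfel-based fusion of depth into a unified reference frame, and coarse-to-fine localization against the fused model. No code accompanies the abstract, so the following is only a minimal geometric sketch of the first two stages under a standard rectified-stereo pinhole model; the function names, parameters, and the naive point-accumulation stand-in for surfel fusion are illustrative assumptions, not the authors' implementation.

import numpy as np

def disparity_to_depth(disparity, fx, baseline):
    """Convert a rectified-stereo disparity map (pixels) to metric depth: Z = fx * b / d."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = fx * baseline / disparity[valid]
    return depth

def backproject(depth, K):
    """Lift a depth map into a 3D point cloud in the camera frame (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with valid depth

def fuse_into_world(points_cam, T_world_cam, world_points):
    """Transform camera-frame points by the estimated laparoscope pose and append them
    to the global model (a stand-in for surfel fusion, which would additionally
    merge corresponding surfels and update positions, normals, and confidences)."""
    R, t = T_world_cam[:3, :3], T_world_cam[:3, 3]
    points_world = points_cam @ R.T + t
    if world_points.size == 0:
        return points_world
    return np.vstack([world_points, points_world])

# Toy usage: one synthetic frame (constant disparity) fused at the identity pose.
if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    disparity = np.full((480, 640), 40.0)  # placeholder for the depth network's output
    depth = disparity_to_depth(disparity, fx=K[0, 0], baseline=0.005)
    points = backproject(depth, K)
    model = fuse_into_world(points, np.eye(4), np.empty((0, 3)))
    print(model.shape)  # (307200, 3)

In the actual framework, the disparity would come from the fine-tuned stereo network and the pose from the tracking module, with new laparoscope views localized coarse-to-fine against the fused model rather than assumed known.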
