Abstract

Laparoscopic surgery offers patients advantages such as small incisions and quick postoperative recovery. Unfortunately, surgeons struggle to grasp 3D spatial relationships in the abdominal cavity. Methods have been proposed to present the 3D information of the abdominal cavity using augmented reality (AR) or virtual reality (VR). Although 3D geometric information is crucial for such methods, it is difficult to reconstruct dense 3D organ shapes with a feature-point-based 3D reconstruction method such as structure from motion (SfM) because of the appearance characteristics of organs (e.g., textureless and glossy surfaces). Our research addresses this problem by estimating depth information from laparoscopic images using deep learning. We constructed a training dataset of paired RGB and depth images captured with an RGB-D camera, implemented a depth image generator based on a generative adversarial network (GAN), and generated a depth image from a single-shot RGB image. Through calibration between the laparoscopic camera and the RGB-D camera, laparoscopic images were transformed into RGB images suitable as generator input. We then generated depth images by feeding the transformed laparoscopic images into the GAN generator. The scale factor that relates the depth image to real-world dimensions was calculated by comparing the predicted depth values with the 3D information estimated by SfM. Consequently, back-projecting the depth image into 3D space increased the density of the organ model.
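
The core of the method is a conditional image-to-image translation network that maps an RGB laparoscopic frame to a depth map. The abstract does not specify the architecture, so the following PyTorch sketch assumes a pix2pix-style setup: an encoder-decoder generator, a patch-wise discriminator conditioned on the input RGB image, and an adversarial loss combined with an L1 term. All layer sizes, the loss weight `lambda_l1`, and the training-step structure are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn

class DepthGenerator(nn.Module):
    """Encoder-decoder generator: RGB (3 ch) -> normalized depth (1 ch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        return self.net(rgb)

class PatchDiscriminator(nn.Module):
    """Patch-wise discriminator judging (RGB, depth) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )

    def forward(self, rgb, depth):
        return self.net(torch.cat([rgb, depth], dim=1))

def train_step(G, D, opt_g, opt_d, rgb, depth_gt, lambda_l1=100.0):
    """One conditional-GAN update on a paired (RGB, depth) batch from the RGB-D dataset."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator: real pairs -> 1, generated pairs -> 0.
    opt_d.zero_grad()
    fake = G(rgb).detach()
    real_logits, fake_logits = D(rgb, depth_gt), D(rgb, fake)
    loss_d = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator while staying close to the measured depth.
    opt_g.zero_grad()
    fake = G(rgb)
    pred = D(rgb, fake)
    loss_g = bce(pred, torch.ones_like(pred)) + lambda_l1 * l1(fake, depth_gt)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

A pix2pix-style optimizer (e.g., `torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))` for both networks) would be a typical companion to this loss; the authors' actual hyperparameters are not given in the abstract.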
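Because the generated depth map carries the scale of the training data rather than that of the patient's anatomy, the method recovers a scale factor by comparing predicted depth values against the sparse 3D points estimated by SfM. The sketch below assumes the SfM points have already been projected into the laparoscopic frame, yielding pixel coordinates and camera-space depths; the function name and the robust median estimator are illustrative choices, not the paper's stated formula.

```python
import numpy as np

def estimate_scale(pred_depth, sfm_pixels, sfm_depths):
    """Scale factor s such that s * pred_depth approximates the SfM-derived depth.

    pred_depth : (H, W) depth map produced by the GAN generator (relative scale).
    sfm_pixels : (N, 2) integer (u, v) pixel coordinates of SfM points in the frame.
    sfm_depths : (N,) camera-space depth values of those SfM points.
    """
    u, v = sfm_pixels[:, 0], sfm_pixels[:, 1]
    d_pred = pred_depth[v, u]          # predicted depth at each SfM point
    valid = d_pred > 0                 # ignore pixels with no prediction
    # Median of per-point ratios is robust against SfM outliers.
    return float(np.median(sfm_depths[valid] / d_pred[valid]))
```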
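With the scale resolved, densifying the organ model amounts to back-projecting every depth pixel into 3D space. This is the standard pinhole-camera inverse projection; `fx`, `fy`, `cx`, `cy` stand for the calibrated laparoscope intrinsics and are assumed inputs here.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift an (H, W) scaled depth map to an (M, 3) point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids, each (H, W)
    z = depth
    x = (u - cx) * z / fx  # inverse of the pinhole projection u = fx * X / Z + cx
    y = (v - cy) * z / fy  # inverse of v = fy * Y / Z + cy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth

# Merging these dense per-frame point clouds with the sparse SfM reconstruction
# yields the densified organ model described in the abstract.
```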
