Abstract

The purpose of this study was to improve surgical scene perception by addressing the challenge of reconstructing highly dynamic surgical scenes. We proposed a novel depth estimation network and a reconstruction framework that incorporates neural radiance fields to provide more accurate scene information for surgical task automation and AR navigation. We added a spatial pyramid pooling module and a Swin-Transformer module to enhance the robustness of stereo depth estimation, and further improved depth accuracy by imposing unique matching constraints derived from optimal transport. To avoid deformation distortion in highly dynamic scenes, we used neural radiance fields to represent scenes implicitly along the time dimension and optimized them with depth and color information in a learning-based manner. Our experiments on the KITTI and SCARED datasets show that the proposed depth estimation network performs close to the state-of-the-art (SOTA) method on natural images and surpasses it on medical images by 1.12% in 3 px error and 0.45 px in end-point error (EPE). The proposed dynamic reconstruction framework successfully reconstructed the dynamic cardiac surface in a totally endoscopic coronary artery bypass video, achieving SOTA performance with a PSNR of 27.983 dB, an SSIM of 0.812, and an LPIPS of 0.189. The proposed depth estimation network and reconstruction framework make a significant contribution to the field of surgical scene perception: the network achieves better results than SOTA methods on medical datasets, reducing mismatches and producing more accurate depth maps with clearer edges, and the reconstruction framework is verified on a series of dynamic cardiac surgical images. Future work will focus on improving training speed and addressing the limited field of view.
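To make the matching constraint concrete, the sketch below shows how an entropy-regularized optimal transport step (Sinkhorn iterations) can push a stereo matching cost matrix toward near-unique correspondences, which is the general idea behind the constraint the abstract describes. The abstract does not give the authors' formulation, so the function name `sinkhorn_matching` and the parameters `eps` and `n_iters` are illustrative assumptions, not the paper's API.

```python
# Minimal sketch, assuming a per-scanline stereo cost matrix of shape [N, M].
# Hypothetical names; not the authors' implementation.
import torch

def sinkhorn_matching(cost: torch.Tensor, eps: float = 0.1, n_iters: int = 50) -> torch.Tensor:
    """Convert a matching cost matrix into a soft assignment whose rows and
    columns each carry roughly unit mass, discouraging many-to-one matches."""
    # Kernel of the entropic OT problem in log space: log K = -C / eps.
    log_K = -cost / eps
    log_u = torch.zeros(cost.shape[0], device=cost.device)
    log_v = torch.zeros(cost.shape[1], device=cost.device)
    for _ in range(n_iters):
        # Alternating row/column normalization (log-domain Sinkhorn,
        # uniform marginals) for numerical stability.
        log_u = -torch.logsumexp(log_K + log_v[None, :], dim=1)
        log_v = -torch.logsumexp(log_K + log_u[:, None], dim=0)
    # Transport plan: entry (i, j) is the soft probability of matching i <-> j.
    return torch.exp(log_u[:, None] + log_K + log_v[None, :])

# Example: the plan concentrates mass on the cheapest unique pairing along
# one scanline, and its argmax can serve as a disparity proxy.
cost = torch.rand(8, 8)
plan = sinkhorn_matching(cost)
disparity = plan.argmax(dim=1)
```

Because the plan's row and column sums are balanced, a pixel that would otherwise match several candidates equally well is forced to commit most of its mass to one of them, which is the "unique matching" behavior the abstract credits with reducing mismatches.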
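Similarly, the time-dimension implicit representation can be sketched as a radiance field conditioned on time and supervised with both color and depth, as the abstract describes. `DynamicNeRF`, `render_ray`, and all hyperparameters below are hypothetical simplifications, not the authors' architecture.

```python
# Minimal sketch, assuming a field F(x, y, z, t) -> (color, density) rendered
# by quadrature along each camera ray. Illustrative names and sizes only.
import torch
import torch.nn as nn

class DynamicNeRF(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        # Input: 3D position + time; output: RGB + volume density.
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor):
        h = self.mlp(torch.cat([xyz, t], dim=-1))
        rgb = torch.sigmoid(h[..., :3])   # color in [0, 1]
        sigma = torch.relu(h[..., 3:])    # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, t, n_samples=64, near=0.1, far=2.0):
    """Volume rendering that returns expected color AND expected depth, so the
    field can be optimized against photometric and depth targets jointly."""
    z = torch.linspace(near, far, n_samples)
    pts = origin + z[:, None] * direction                  # [n_samples, 3]
    rgb, sigma = model(pts, t.expand(n_samples, 1))
    delta = z[1] - z[0]
    alpha = 1 - torch.exp(-sigma.squeeze(-1) * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha + 1e-10])[:-1], dim=0)
    w = trans * alpha                                      # per-sample weights
    color = (w[:, None] * rgb).sum(0)
    depth = (w * z).sum(0)
    return color, depth

# Usage: optimize with a photometric term plus a depth term fed by the
# stereo network's estimate (targets here are placeholders).
model = DynamicNeRF()
origin, direction, t = torch.zeros(3), torch.tensor([0., 0., 1.]), torch.tensor([0.5])
color, depth = render_ray(model, origin, direction, t)
loss = ((color - torch.ones(3)) ** 2).mean() + 0.1 * (depth - 1.0).abs()
```

Conditioning on `t` lets the field describe a deforming surface without an explicit deformation model, which matches the abstract's motivation of avoiding deformation distortion in highly dynamic scenes.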
