RGB-D cameras in the operating room (OR) provide synchronized views of complex surgical scenes. Assimilating this multi-view data into a unified representation enables downstream tasks such as object detection and tracking, pose estimation, and action recognition. Neural radiance fields (NeRFs) can provide continuous representations of complex scenes with a limited memory footprint. However, existing NeRF methods perform poorly in real-world OR settings, where a small set of cameras captures the room from entirely different vantage points. In this work, we propose NeRF-OR, a method for 3D reconstruction of dynamic surgical scenes in the OR. Where other methods for sparse-view datasets use either time-of-flight sensor depth or dense depth estimated from color images, NeRF-OR uses a combination of both. The depth estimates mitigate the missing values that occur in sensor depth images due to reflective materials and object boundaries. We propose to supervise with surface normals calculated from the estimated depths, because these are largely scale invariant. We fit NeRF-OR to static surgical scenes in the 4D-OR dataset and show that its representations are geometrically accurate, whereas state-of-the-art methods collapse to sub-optimal solutions. Compared to earlier work, NeRF-OR captures fine scene details while training 30× faster. Additionally, NeRF-OR can capture whole-surgery videos while synthesizing views at intermediate time values with an average PSNR of 24.86 dB. Lastly, we find that our approach has merit in sparse-view settings beyond the OR, by benchmarking on the NVS-RGBD dataset, which contains as few as three training views. NeRF-OR synthesizes images with a PSNR of 26.72 dB, a 1.7% improvement over the state of the art. Our results show that NeRF-OR enables novel view synthesis from videos captured by a small number of cameras with entirely different vantage points, which is the typical camera setup in the OR. Code is available at github.com/Beerend/NeRF-OR.
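To make the surface-normal supervision concrete, the sketch below shows one common way to derive per-pixel normals from a (possibly monocular) depth map: back-project pixels to 3D camera coordinates, take finite differences as surface tangents, and normalize their cross product. The function name `normals_from_depth` and the pinhole-intrinsics parameters (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not the exact implementation used in NeRF-OR.

```python
import numpy as np

def normals_from_depth(depth, fx, fy, cx, cy):
    """Estimate per-pixel surface normals from a depth map.

    Generic finite-difference sketch (not the exact NeRF-OR procedure):
    back-project pixels to camera-space 3D points, take image-space
    gradients of the point map, and normalize their cross product.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project to camera-space 3D points using pinhole intrinsics.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)          # (H, W, 3)
    # Finite differences along the image axes approximate surface tangents.
    dp_du = np.gradient(points, axis=1)
    dp_dv = np.gradient(points, axis=0)
    # The normal is the normalized cross product of the two tangents.
    normals = np.cross(dp_du, dp_dv)
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)
```

Note that multiplying the depth map by a global constant scales both tangent vectors equally, so the normalized cross product is unchanged; this is why normals computed in this way are largely insensitive to the unknown scale of monocular depth estimates.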