Simultaneous localization and mapping (SLAM) is fundamental for intelligent mobile units to perform diverse tasks. Recent work integrating neural rendering with SLAM has shown promising results in photorealistic environment reconstruction. However, existing methods estimate pose by minimizing the error between rendered and input images, which is time-consuming and cannot run in real time, deviating from the original intent of SLAM. In this paper, we propose a dense RGB-D SLAM system based on 3D Gaussian splatting (3DGS) that employs generalized iterative closest point (G-ICP) for pose estimation. We actively exploit 3D point cloud information to improve both the tracking accuracy and the operating speed of the system. We further propose a dual keyframe selection strategy and a corresponding densification method, which effectively reconstruct newly observed scenes and improve the quality of previously constructed maps. In addition, we introduce a regularization loss to counter the scale explosion of the 3D Gaussians and their over-elongation along the camera viewing direction. Experiments on the Replica, TUM-RGBD, and ScanNet datasets show that our method achieves state-of-the-art tracking accuracy and runtime while remaining competitive in rendering quality.