Abstract
In low-light environments, the scarcity of visual information makes feature extraction and matching challenging for traditional visual simultaneous localization and mapping (SLAM) systems, and changes in ambient lighting can reduce the accuracy and recall of loop closure detection. Moreover, most existing image enhancement methods introduce noise, artifacts, and color distortion when enhancing images. To address these issues, we propose LL-VI SLAM, a low-light visual-inertial SLAM system that integrates an image enhancement network into the front end of the SLAM pipeline. The system consists of a learning-based low-light enhancement network and an improved visual-inertial odometry. The enhancement network, composed of a Retinex-based enhancer and a U-Net-based denoiser, increases image brightness while mitigating the adverse effects of noise and artifacts. In addition, we incorporate a robust inertial measurement unit (IMU) initialization process at the front end of the system to accurately estimate gyroscope biases and improve rotational estimation accuracy. Experimental results demonstrate that LL-VI SLAM outperforms existing methods on three datasets, namely LOLv1, ETH3D, and TUM VI, as well as in real-world scenarios. Our approach achieves a peak signal-to-noise ratio (PSNR) of 22.08 dB on the LOLv1 dataset. Moreover, on the TUM VI dataset, our system reduces localization error by 22.05% compared with ORB-SLAM3, demonstrating the accuracy and robustness of the proposed method in low-light environments.
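For context, Retinex-based enhancers such as the one named above build on the classical Retinex image model; the sketch below shows that general decomposition, not the paper's exact formulation, which is given only in the full text:

\[
S = R \circ L, \qquad R \approx S \oslash \hat{L},
\]

where $S$ is the observed low-light image, $R$ the reflectance, $L$ the illumination, $\circ$ the Hadamard (element-wise) product, and $\oslash$ element-wise division. Under this model, enhancement amounts to estimating an illumination map $\hat{L}$ and recovering (or brightening) the reflectance component, which is why a separate denoiser is typically needed: noise hidden in dark regions is amplified along with the signal.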