This work proposes a LiDAR-inertial-visual fusion framework, termed R3LIVE++, that achieves robust and accurate state estimation while reconstructing the radiance map on the fly. R3LIVE++ consists of a LiDAR-inertial odometry (LIO) subsystem and a visual-inertial odometry (VIO) subsystem, both running in real time. The LIO subsystem uses LiDAR measurements to reconstruct the geometric structure, while the VIO subsystem simultaneously recovers the radiance information of that structure from the input images. R3LIVE++ builds on R3LIVE and further improves localization and mapping accuracy by accounting for camera photometric calibration and by estimating the camera exposure time online. We conduct extensive experiments on public and self-collected datasets to compare the proposed system against other state-of-the-art SLAM systems. Quantitative and qualitative results show that R3LIVE++ significantly outperforms these systems in both accuracy and robustness. Moreover, to demonstrate the extensibility of R3LIVE++, we develop several applications based on the reconstructed maps, such as high-dynamic-range (HDR) imaging, virtual environment exploration, and 3D video gaming.
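The photometric calibration and online exposure-time estimation mentioned above rest on a standard image-formation model in which an observed pixel intensity depends on the scene radiance, the camera response function, vignetting, and the exposure time. The sketch below is not the authors' implementation; it is a minimal, self-contained C++ illustration of that model, with an assumed gamma-style response function, an assumed radial vignetting model, and hypothetical function names, showing how a radiance value could be recovered from an intensity observation once the exposure time is estimated.

```cpp
// Minimal sketch (not the R3LIVE++ source) of the photometric model
//   I = g( t * V(u) * L ),
// where g is the camera response function (CRF), t the exposure time,
// V(u) the vignetting factor at pixel u, and L the scene radiance.
// The CRF, vignetting model, and function names are assumptions for illustration.
#include <cmath>
#include <cstdio>

// Hypothetical inverse CRF: maps an 8-bit intensity to linear irradiance.
// A real system would use a calibrated response function.
double inverse_crf(double intensity)
{
    const double gamma = 2.2;  // assumed gamma-like response
    return std::pow(intensity / 255.0, gamma);
}

// Hypothetical vignetting model: radial attenuation toward the image border.
double vignetting(double u, double v, double cx, double cy, double k)
{
    const double r2 = (u - cx) * (u - cx) + (v - cy) * (v - cy);
    return 1.0 / (1.0 + k * r2);
}

// Recover the radiance of a map point from one observation, given the current
// exposure-time estimate. A VIO could compare such recovered radiance against
// the value stored in the radiance map to form photometric residuals.
double recover_radiance(double intensity, double t_exposure,
                        double u, double v, double cx, double cy, double k)
{
    const double irradiance = inverse_crf(intensity);
    return irradiance / (t_exposure * vignetting(u, v, cx, cy, k));
}

int main()
{
    // Example: the same scene point observed in two frames with different
    // exposure times maps to roughly the same radiance after compensation.
    double L1 = recover_radiance(128.0, 0.010, 320, 240, 320, 240, 1e-6);
    double L2 = recover_radiance(167.0, 0.018, 320, 240, 320, 240, 1e-6);
    std::printf("radiance estimates: %.4f vs %.4f\n", L1, L2);
    return 0;
}
```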