Abstract

This paper presents PAL-SLAM2, a visual and visual–inertial monocular simultaneous localization and mapping (SLAM) system for a panoramic annular lens (PAL) with an ultra-hemispherical field of view (FoV), overcoming the limitations of traditional frameworks in handling fast turns, nighttime conditions, and rapid lighting changes. The system incorporates modules for initialization, tracking, local mapping, and loop and map merging. To fully exploit information from the negative half-space (z < 0), keypoints are projected onto the unit sphere for visual initialization and tightly coupled visual–inertial optimization. Leveraging a multimap strategy, the feature-based PAL-SLAM2 detects unidirectional and bidirectional common areas in PAL images, thereby enhancing the performance of loop correction and map merging. Testing shows that it achieves an average accuracy of 7.1 cm on the PALVIO indoor dataset, surpassing similar state-of-the-art frameworks. Furthermore, we have collected a large-scale dataset comprising over 120,000 images with corresponding inertial measurement unit (IMU) data, demonstrating PAL-SLAM2's robustness under challenging outdoor conditions. Relying solely on visual and inertial inputs, our system is particularly suited to environments where Global Navigation Satellite System (GNSS) signals are frequently obstructed, such as indoor spaces or dense urban areas. The dataset can be accessed at: https://github.com/wwendy233/Plaza-Dataset.
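The unit-sphere projection mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an already unprojected set of 3-D bearing rays (the hypothetical `rays` input, as a calibrated PAL model would produce) and simply normalizes them, preserving directions in the negative half-space (z < 0) that an ultra-hemispherical FoV observes.

```python
import numpy as np

def to_unit_sphere(rays):
    """Normalize 3-D bearing rays onto the unit sphere.

    `rays` is an (N, 3) array of bearing vectors from a (hypothetical)
    calibrated PAL unprojection. Rays with z < 0 are kept as-is, which
    is what allows features from the negative half-space to contribute
    to initialization and visual-inertial optimization.
    """
    rays = np.asarray(rays, dtype=float)
    norms = np.linalg.norm(rays, axis=1, keepdims=True)
    return rays / norms

# Example: one forward-looking bearing and one in the negative half-space.
pts = to_unit_sphere([[0.0, 0.0, 2.0], [1.0, 0.0, -1.0]])
```

Working on the sphere rather than an image plane avoids the undefined projection of points at or behind the focal plane, which is why wide-FoV systems commonly adopt it.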
