Abstract

There is a growing number of autonomous driving datasets that can be used to benchmark vision- and LiDAR-based place recognition and localization methods. The same sensor modalities, vision and depth, are equally important for indoor localization and navigation, yet large indoor datasets remain scarce. This work presents a realistic indoor dataset for long-term evaluation of place recognition and localization methods. The dataset contains RGB and LiDAR sequences captured inside campus buildings over a period of nine months and under varying illumination and occupancy conditions. It covers three typical indoor spaces: office, basement, and foyer. We describe the collection of the dataset, and in the experimental part we report results for two state-of-the-art deep learning place recognition methods. The data will be available through https://github.com/lasuomela/TAU-Indoors.
