Abstract

Sensor calibration is a fundamental task in robotics and indoor navigation research using heterogeneous data, such as visual images and point clouds collected by Light Detection and Ranging (LiDAR) sensors. In this paper, we take advantage of our indoor navigation robotic platform to develop novel sensor fusion algorithms operating at different feature levels. The recording platform is equipped with a Velodyne HDL-64E LiDAR and a monocular camera. The dataset, named L2VCali, is composed of raw point clouds, images, and ground truth (GT) for both intrinsic and extrinsic calibration at each epoch. The data is collected in different types of indoor environments, for example, open areas, Manhattan-world rooms, and hallways. Results from state-of-the-art algorithms reveal that published methods struggle to maintain high accuracy when indoor environments become complex and contain repetitive features. The dataset aims to become a benchmark for evaluating the robustness of calibration algorithms by providing both typical and challenging scenarios.
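To make the roles of the intrinsic and extrinsic calibration parameters mentioned above concrete, the sketch below shows how a LiDAR point is mapped into a camera image under the standard pinhole model. This is a generic illustration, not code from the dataset or paper: the function name, the intrinsic matrix values, and the identity extrinsics are all assumptions chosen for the example.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project (N, 3) LiDAR points into pixel coordinates.

    K: (3, 3) camera intrinsic matrix.
    R: (3, 3) rotation, t: (3,) translation (LiDAR-to-camera extrinsics).
    Returns (N, 2) pixel coordinates and (N,) depths in the camera frame.
    """
    pts_cam = points_lidar @ R.T + t     # extrinsics: LiDAR frame -> camera frame
    depths = pts_cam[:, 2]               # z-coordinate is depth along the optical axis
    proj = pts_cam @ K.T                 # intrinsics: camera frame -> image plane
    pixels = proj[:, :2] / proj[:, 2:3]  # perspective division
    return pixels, depths

# Toy example with assumed values: identity extrinsics and a simple intrinsic
# matrix (focal length 500 px, principal point at (320, 240)).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0]])        # a point 2 m in front of the camera
pix, d = project_lidar_to_image(pts, K, R, t)
print(pix)  # a point on the optical axis projects to the principal point (320, 240)
```

Calibration benchmarks such as the one described here typically compare the (R, t) and K estimated by an algorithm against the dataset's ground truth, so a projection like this is also how reprojection error is usually computed.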
