Abstract
Recent advances in robotics and deep learning demonstrate promising 3-D perception performance achieved by fusing light detection and ranging (LiDAR) sensor and camera data, where both spatial calibration and temporal synchronization are generally required. While the LiDAR–camera calibration problem has been actively studied over the past few years, LiDAR–camera synchronization has received less attention and is mainly addressed with a conventional pipeline consisting of clock synchronization and temporal synchronization. This conventional pipeline has potential limitations that have not been sufficiently addressed and that could become a bottleneck for the wide adoption of low-cost LiDAR–camera platforms. Departing from the conventional pipeline, in this article we propose LiCaS3, the first deep-learning-based framework for the LiDAR–camera synchronization task via self-supervised learning. LiCaS3 requires neither hardware synchronization nor extra annotations and can be deployed both online and offline. Evaluated on both the KITTI and Newer College datasets, the proposed method shows promising performance. The code will be publicly available at https://github.com/KleinYuan/LiCaS3.
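To make the self-supervised formulation concrete, the following is a minimal sketch, not the authors' implementation, of one common way such a synchronization task can be cast: the LiDAR sweep paired with an image is shifted by a random number of frames during training, and a network is trained to classify that shift, so the supervision signal comes for free from the data stream itself. All names (`SyncNet`, `training_step`, `max_shift`) and the input representation (LiDAR sweeps rendered as depth images) are hypothetical assumptions for illustration.

```python
# Hedged sketch of a self-supervised LiDAR-camera synchronization
# objective (NOT the LiCaS3 code): randomly delay the LiDAR sweep
# and train a classifier to recover the delay in frames.
import torch
import torch.nn as nn

class SyncNet(nn.Module):
    """Predicts the temporal offset (in frames) between a camera
    image and a candidate LiDAR sweep rendered as a depth image."""
    def __init__(self, max_shift: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(            # shared conv encoder
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # one logit per candidate offset in [0, max_shift]
        self.head = nn.Linear(32, max_shift + 1)

    def forward(self, image, lidar_depth):
        # image: (B, 3, H, W); lidar_depth: (B, 1, H, W)
        x = torch.cat([image, lidar_depth], dim=1)
        return self.head(self.encoder(x))

def training_step(model, images, lidar_seq, max_shift=4):
    """images: (B, 3, H, W) camera frames.
    lidar_seq: (B, max_shift + 1, 1, H, W) depth renderings of the
    current sweep plus the max_shift sweeps preceding it.
    Self-supervision: sample a random shift, feed the correspondingly
    delayed sweep, and ask the network to recover the shift."""
    B = images.shape[0]
    target = torch.randint(0, max_shift + 1, (B,))   # random offsets
    delayed = lidar_seq[torch.arange(B), target]     # (B, 1, H, W)
    logits = model(images, delayed)
    return nn.functional.cross_entropy(logits, target)

if __name__ == "__main__":
    model = SyncNet(max_shift=4)
    imgs = torch.rand(2, 3, 64, 64)
    sweeps = torch.rand(2, 5, 1, 64, 64)
    loss = training_step(model, imgs, sweeps)
    loss.backward()
    print(f"toy loss: {loss.item():.3f}")
```

Because the labels are generated by the random shifting itself, no hardware synchronization or manual annotation is needed, which matches the deployment constraints the abstract describes for low-cost LiDAR–camera platforms.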