Abstract

Autonomous indoor parking is challenging due to the lack of precise localization. In this paper, we propose a method that achieves simultaneous localization and mapping with a fisheye camera, using deep learning to identify human-readable visual semantic landmarks in underground parking lots. These visual semantic landmarks are robust under varying lighting conditions and in textureless scenes, compared to low-level point and line features. An extended Kalman filter is used to optimally fuse visual localization information with odometry data. Experimental results show that a semantic map of visual landmarks can be built automatically and robustly. We compare the accuracy of the resulting trajectory against the lidar trajectory obtained with the LeGO-LOAM algorithm.
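The abstract describes fusing visual-landmark localization with odometry via an extended Kalman filter. As a minimal 1-D sketch of that fusion idea (not the authors' implementation; all noise values here are hypothetical), odometry drives the prediction step and a landmark observation drives the update step:

```python
# Minimal 1-D Kalman filter sketch of odometry/vision fusion.
# predict(): propagate the position estimate with an odometry increment.
# update(): correct it with a visual-landmark position measurement.
# Q, R, and all numeric values below are illustrative assumptions.

def predict(x, P, u, Q):
    """Propagate state x (variance P) with odometry increment u and process noise Q."""
    return x + u, P + Q

def update(x, P, z, R):
    """Correct state with a landmark measurement z of noise variance R."""
    K = P / (P + R)                    # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Start at the origin with some initial uncertainty.
x, P = 0.0, 1.0
x, P = predict(x, P, u=1.0, Q=0.1)     # odometry reports 1 m of motion
x, P = update(x, P, z=1.2, R=0.5)      # a landmark observation says 1.2 m
```

The update pulls the estimate toward the measurement in proportion to the Kalman gain, and the posterior variance shrinks, which is the mechanism by which landmark observations bound odometry drift.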
