Abstract
Lane-level localization is critical for autonomous vehicles (AVs). However, complex urban scenarios, particularly tunnels, pose significant challenges to AVs’ localization systems. In this paper, we propose a fusion localization method that integrates multiple mass-production sensors, including Global Navigation Satellite Systems (GNSSs), Inertial Measurement Units (IMUs), cameras, and high-definition (HD) maps. Firstly, we use a novel electronic horizon module to assess GNSS integrity and concurrently load the HD map data surrounding the AVs. These map data are then transformed into a visual space to match the corresponding lane lines captured by the on-board camera using an improved BiSeNet. Subsequently, the matched HD map data are used to correct our localization algorithm, which is driven by an extended Kalman filter that integrates multiple sources of information, encompassing a GNSS, IMU, speedometer, camera, and HD maps. Our system is designed with redundancy to handle challenging city tunnel scenarios. To evaluate the proposed system, real-world experiments were conducted on a 36-kilometer city route that includes nine consecutive tunnels, totaling nearly 13 km and accounting for 35% of the entire route. The experimental results reveal that 99% of lateral localization errors are less than 0.29 m, and 90% of longitudinal localization errors are less than 3.25 m, ensuring reliable lane-level localization for AVs in challenging urban tunnel scenarios.
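The extended Kalman filter fusion described above can be sketched as a standard predict/update loop: the IMU yaw rate and speedometer speed drive a nonlinear motion model, while GNSS fixes or map-matched lane-line positions enter as measurement updates. The state vector, noise values, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ekf_predict(x, P, v, yaw_rate, dt, Q):
    """Propagate state [px, py, heading] with speedometer speed v
    and IMU yaw rate through a planar unicycle motion model."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + yaw_rate * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, H, R):
    """Correct the state with a position measurement z, e.g. a GNSS
    fix or a lateral position derived from matched HD-map lane lines."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# One fusion step: predict from IMU/speedometer, then correct with a
# (hypothetical) map-matched position observation of (px, py).
x, P = np.zeros(3), np.eye(3)
x, P = ekf_predict(x, P, v=10.0, yaw_rate=0.0, dt=0.1, Q=0.01 * np.eye(3))
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
x, P = ekf_update(x, P, z=np.array([1.2, 0.1]), H=H, R=0.25 * np.eye(2))
```

In the paper's tunnel scenario, the key property this structure provides is redundancy: when GNSS integrity degrades, the update step can fall back to camera/HD-map lane-line matches while dead reckoning continues in the predict step.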