Abstract
Despite the high-precision performance of GNSS real-time kinematic (RTK) positioning in many scenarios, harsh signal environments still lead to ambiguity fixing failures and degraded positioning results in kinematic localization. Intelligent vehicles are equipped with cameras for perception, and visual measurements can add new information to satellite measurements, thereby improving integer ambiguity resolution (AR). Given that road lane lines are stationary and their accurate positions can be acquired in advance, we encode the lane lines as rectangles and integrate them into a commonly used map format. To handle ambiguous and repetitive lane lines, a map-based ambiguous lane matching method is proposed to find all possible rectangles in which a vehicle may be located. Vision-based relative positioning is then applied by measuring the relative position between a lane-line corner and the vehicle. Finally, the two results are introduced into RTK single-epoch AR to find the most accurate ambiguity estimates. To evaluate our method extensively, we compare it with a tightly integrated GNSS/INS system (GINS) and a well-known tightly coupled GNSS-visual-inertial fusion system (GVINS) in simulated urban environments and a real dense urban environment. Experimental results demonstrate the superiority of our method over GINS and GVINS in success rate, fixed rate, and pose accuracy.
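To illustrate the map-based ambiguous lane matching step described above, the following minimal sketch (not the authors' implementation) collects every rectangle-encoded lane-line segment that an approximate GNSS position could plausibly fall in. The `Rect` structure, the ENU map frame, and the metre-level `search_radius` are assumptions made here for illustration only.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Rect:
    """Axis-aligned rectangle encoding one lane-line segment in map (ENU) coordinates (assumed format)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float


def ambiguous_lane_candidates(approx_pos: Tuple[float, float],
                              lane_rects: List[Rect],
                              search_radius: float = 3.0) -> List[Rect]:
    """Return all lane rectangles within `search_radius` of the approximate
    position -- the ambiguous candidate set later constrained by single-epoch AR."""
    x, y = approx_pos
    candidates = []
    for r in lane_rects:
        # Distance from the point to the rectangle (zero if the point lies inside it).
        dx = max(r.x_min - x, 0.0, x - r.x_max)
        dy = max(r.y_min - y, 0.0, y - r.y_max)
        if (dx * dx + dy * dy) ** 0.5 <= search_radius:
            candidates.append(r)
    return candidates
```

Because repetitive lane markings make a single nearest-rectangle choice unreliable, all candidates within the search radius are retained and the ambiguity is resolved jointly with the RTK integer ambiguities.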