Abstract

This paper presents a method for fusing the ego-motion of a robot or land vehicle, estimated from an upward-facing camera, with Global Navigation Satellite System (GNSS) signals for navigation in urban environments. A sky-pointing camera is mounted on the roof of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold. First, the upward-facing camera is used to segment each acquired image into sky and non-sky regions; a satellite whose line of sight falls into a non-sky area (e.g., buildings, trees) is rejected and excluded from the final position solution. Second, the sky-pointing camera (with a field of view of about 90 degrees) is well suited to ego-motion estimation in urban areas because it does not see most moving objects (e.g., pedestrians, cars) and can therefore estimate the ego-motion with fewer outliers than a typical forward-facing camera. The GNSS and visual measurements are tightly coupled in a Kalman filter to obtain the final position solution. Experimental results in a deep urban canyon demonstrate that the system provides satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision solutions, by 20 percent and 82 percent (in the worst case) respectively, even when fewer than four GNSS satellites are available.
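For illustration, the sketch below shows one plausible way to implement the satellite-exclusion step described above: each tracked satellite's azimuth and elevation are projected into the upward-facing image and the satellite is kept only if the corresponding pixel is classified as sky in the binary segmentation mask. This is a minimal sketch, not the authors' implementation; the equidistant fisheye model and the names `az_el_to_pixel` and `sky_mask` are assumptions introduced here.

```python
import numpy as np

def az_el_to_pixel(az_deg, el_deg, cx, cy, f_px):
    """Project a satellite direction (azimuth/elevation, degrees) into an
    upward-facing image, assuming an ideal equidistant fisheye model with the
    optical axis at the zenith (illustrative assumption, not from the paper)."""
    zenith = np.radians(90.0 - el_deg)   # angle from the optical axis
    r = f_px * zenith                    # equidistant projection: r = f * theta
    az = np.radians(az_deg)
    u = cx + r * np.sin(az)              # image x, east of the zenith
    v = cy - r * np.cos(az)              # image y, north of the zenith
    return int(round(u)), int(round(v))

def select_line_of_sight_satellites(sats, sky_mask, cx, cy, f_px):
    """Keep only satellites whose projected pixel falls in the 'sky' class of
    the binary mask (1 = sky, 0 = non-sky); the rest are treated as blocked."""
    visible = []
    h, w = sky_mask.shape
    for sat in sats:                     # sat: dict with 'prn', 'az', 'el'
        u, v = az_el_to_pixel(sat['az'], sat['el'], cx, cy, f_px)
        if 0 <= u < w and 0 <= v < h and sky_mask[v, u] == 1:
            visible.append(sat)          # likely line-of-sight: keep it
    return visible
```

Satellites projecting outside the roughly 90-degree field of view are simply dropped by the bounds check, which is consistent with treating low-elevation signals as unreliable in an urban canyon.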

Highlights

  • Autonomous vehicles rely on navigation sensors such as Global Navigation Satellite System (GNSS) receivers, inertial navigation systems (INS), odometers, LiDAR, radar, etc.

  • A NovAtel SPAN LCI system, which includes a GNSS receiver and an LCI inertial navigation system, was used as the reference system

  • This is referred to as tightly-coupled (TC) GNSS/vision. The loosely-coupled GNSS/vision solution instead integrates measurements from the vision system with the GNSS least-squares PVT solution obtained from range and range-rate observations: both systems compute their navigation solutions independently and are integrated in a loosely-coupled way, meaning that if one of the systems is unable to provide a solution (e.g., GNSS), no update from that system is passed to the integration filter (a minimal sketch contrasting the two schemes follows this list)
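As a rough illustration of the difference, the sketch below (not the authors' implementation; the state layout and helper names are assumptions) contrasts the two update strategies: the loosely-coupled filter can only ingest a GNSS fix when a least-squares PVT solution exists (at least four satellites), whereas the tightly-coupled filter forms one innovation per accepted pseudorange, so even one or two satellites still constrain the state.

```python
import numpy as np

def loose_update_possible(n_visible_sats):
    """Loosely coupled: the GNSS block must first solve its own least-squares
    PVT, which needs at least four satellites; otherwise the integration
    filter receives no GNSS update this epoch."""
    return n_visible_sats >= 4

def tight_innovations(state, sats):
    """Tightly coupled: one pseudorange innovation per accepted satellite.
    state = [px, py, pz, clock_bias] in metres;
    sats  = list of (sat_pos (3,), measured_pseudorange)."""
    return np.array([rho - (np.linalg.norm(p - state[:3]) + state[3])
                     for p, rho in sats])

# Toy usage: with only two satellites the loose scheme is starved of updates,
# while the tight scheme still produces two usable measurement residuals.
state = np.array([0.0, 0.0, 0.0, 10.0])
sats = [(np.array([15e6, 10e6, 20e6]), 26.93e6),
        (np.array([-12e6, 18e6, 17e6]), 27.51e6)]
print(loose_update_possible(len(sats)))   # False: no GNSS update in loose mode
print(tight_innovations(state, sats))     # two residuals the TC filter can use
```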


Summary

Introduction

Autonomous vehicles rely on navigation sensors such as GNSS receivers, inertial navigation systems (INS), odometers, LiDAR, radar, etc. None of these sensors alone can provide position solutions that are satisfactory in terms of accuracy, availability, continuity and reliability at all times and in all environments. Their performance degrades quickly when updates from other systems such as GNSS are not available, and integrating INS and GNSS alone may not be enough to guarantee position solutions of a given accuracy and reliability. For the sky segmentation step, the Fisher classifier takes about 1.03 s to process one image, which is computationally heavy compared to the method proposed in [14], where the Otsu method was found to outperform the other considered algorithms (Meanshift, HMRF-EM, graph-cut) for this specific upward-pointing camera application.
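As an illustration of that segmentation step, the snippet below applies Otsu thresholding to an upward-facing image to obtain a binary sky / non-sky mask. It is only a minimal sketch using OpenCV; the image path and the morphological clean-up are assumptions, not details from the paper.

```python
import cv2
import numpy as np

# Load an upward-facing frame (path is illustrative) and convert to grayscale.
img = cv2.imread("sky_frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's method picks the threshold that minimizes intra-class variance;
# bright sky pixels fall above the threshold, buildings and trees below it.
_, sky_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Optional clean-up (assumption): remove small misclassified blobs.
kernel = np.ones((5, 5), np.uint8)
sky_mask = cv2.morphologyEx(sky_mask, cv2.MORPH_OPEN, kernel)

sky_mask = (sky_mask > 0).astype(np.uint8)   # 1 = sky, 0 = non-sky
```

The resulting binary mask is exactly the kind of input the satellite-exclusion sketch given after the abstract would consume.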

