Abstract

The lane curvature output by the vision sensor can jump for short periods because of shadows, changes in lighting, and broken lane markings, which causes serious problems for autonomous driving control. Predicting or compensating the true lane in real time during such sensor jumps is therefore particularly important. This paper presents a lane compensation method based on multi-sensor fusion of a global positioning system (GPS), an inertial measurement unit (IMU), and vision sensors. A cubic polynomial in the longitudinal distance is selected as the lane model. In this method, a Kalman filter estimates vehicle velocity and yaw angle from GPS and IMU measurements, and a vehicle kinematics model describes the vehicle motion. The geometric relationship between the vehicle and the relative lane motion at the current moment is then used to solve for the coefficients of the lane polynomial at the next moment. Simulation and vehicle test results show that the predicted lane can compensate for short-time failures of the vision sensor with good real-time performance, robustness, and accuracy.
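The coefficient update the abstract describes can be illustrated with a minimal sketch. The paper derives the update from its own kinematic and geometric relations, which are not given here; the formulas below are an assumption based on the stated lane model y(x) = c0 + c1·x + c2·x² + c3·x³, using a Taylor re-expansion for the forward motion and a first-order (small-angle) correction for the yaw change. The function name `propagate_lane` and its arguments are hypothetical.

```python
def propagate_lane(coeffs, v, yaw_rate, dt):
    """Propagate cubic lane-model coefficients one time step.

    Lane model in the vehicle frame: y(x) = c0 + c1*x + c2*x**2 + c3*x**3,
    with x the longitudinal and y the lateral distance.

    Sketch, assuming a small yaw change per step:
    1. the vehicle advances d = v*dt along x, so the polynomial is
       re-expanded about the new origin (shift x -> x + d);
    2. the vehicle frame rotates by dpsi = yaw_rate*dt, which to first
       order subtracts dpsi from the heading (linear) coefficient.
    """
    c0, c1, c2, c3 = coeffs
    d = v * dt            # longitudinal advance over the step
    dpsi = yaw_rate * dt  # yaw change over the step (assumed small)

    # Step 1: Taylor re-expansion of the cubic about the new origin x = d.
    s0 = c0 + c1 * d + c2 * d**2 + c3 * d**3
    s1 = c1 + 2 * c2 * d + 3 * c3 * d**2
    s2 = c2 + 3 * c3 * d
    s3 = c3

    # Step 2: first-order frame rotation by dpsi (y' ~ y - dpsi * x).
    return (s0, s1 - dpsi, s2, s3)
```

For example, a vehicle driving straight along the lane centerline (all coefficients zero, zero yaw rate) keeps a zero polynomial, while a pure yaw of dpsi shifts only the heading coefficient to -dpsi, as expected when the lane rotates in the vehicle frame.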

Highlights

  • In recent years, intelligent driving vehicles have received widespread attention

  • When global positioning system (GPS) and INS are used for high-precision positioning, the absolute position information is obtained, which must be matched with a high-precision map to obtain the relative road position information

  • A lane compensation method based on sensor fusion is proposed to compensate for the short-time failure of vision sensors



Introduction

Intelligent driving vehicles have received widespread attention because they can play a positive role in the daily traffic environment. Important intelligent driving functions, such as lane keeping assist systems (LKAs) and lane change systems (LCSs), have been widely studied. To ensure that autonomous vehicles can drive safely on the road, high-precision, lane-level localization is required. Lane-level localization can be realized with Lidar, GPS/INS, or cameras. Lidar costs more than the other sensors. When GPS and INS are used for high-precision positioning, they yield absolute position information, which must be matched against a high-precision map to obtain the relative road position. High-efficiency, low-cost vision-based environmental perception will therefore become the main direction for the future industrialization of intelligent driving vehicles [1].

