Abstract

Given recent advances in computer vision, image processing, and control systems, self-driving vehicles have become one of the most promising and challenging research topics. The design of vision-based robust controllers that keep an autonomous car centered in the lane despite uncertainties and disturbances remains an open challenge. This paper presents a hybrid control architecture that combines Deep Reinforcement Learning (DRL) with a Robust Linear Quadratic Regulator (RLQR) for vision-based lateral control of an autonomous vehicle, using evolutionary estimation to model the vehicle uncertainties. For performance comparison, a pure DRL method and three other hybrid controllers are also evaluated. Each controller receives real-time semantically segmented RGB camera images and computes continuous steering actions to keep the vehicle at the center of the lane while driving at constant velocity. Simulation results show that the proposed hybrid RLQR architecture with evolutionary uncertainty estimation outperforms the other algorithms: it achieves lower tracking errors, smoother steering inputs, complete collision avoidance, and better generalization to new urban environments, while significantly reducing the required training time.
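The abstract does not specify how the DRL policy and the RLQR are fused into a single steering command. The sketch below is only a minimal illustration of one plausible blending scheme, assuming the learned policy proposes a steering action from the segmented image and an RLQR state-feedback term corrects an estimated lateral state; the function names, gain values, and mixing weight are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of a DRL + RLQR hybrid lateral controller.
# Nothing here reflects the paper's actual implementation details.

def drl_steering(policy, segmented_image):
    """Steering proposed by the learned policy from a segmented camera frame."""
    return float(policy(segmented_image))

def rlqr_steering(K, x):
    """State-feedback correction u = -K x on an estimated lateral state x
    (e.g. lateral offset [m] and heading error [rad])."""
    return float(-K @ x)

def hybrid_steering(policy, K, segmented_image, x, alpha=0.5):
    """Blend the two commands with an assumed mixing weight alpha and
    clip to a normalized steering range of [-1, 1]."""
    u = alpha * drl_steering(policy, segmented_image) \
        + (1.0 - alpha) * rlqr_steering(K, x)
    return float(np.clip(u, -1.0, 1.0))

# Toy usage with stand-in components.
dummy_policy = lambda img: 0.1            # placeholder for the DRL policy output
K = np.array([0.8, 1.2])                  # assumed RLQR gain vector
x = np.array([0.3, -0.05])                # lateral offset, heading error
print(hybrid_steering(dummy_policy, K, np.zeros((64, 64)), x))
```

A fixed blend is only one option; the correction could equally be applied as an additive safety term or scheduled by the estimated uncertainty, which is why every parameter above should be read as an assumption.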
