Abstract

In obstacle avoidance trajectory planning, the environmental information collected by onboard and roadside sensors must be transmitted to the intelligent vehicle controller over communication networks such as CAN and DSRC. However, inherent communication constraints such as delay and packet loss introduce obstacle avoidance errors. To this end, a game deep Q-learning (GDQN) obstacle avoidance strategy is proposed that combines deep Q-learning with a game-theoretic reward strategy. The deep Q-learning network models the uncertainty introduced by the communication constraints, and the obstacle avoidance reward integrates traffic rules and vehicle dynamics. A scene preprocessing algorithm based on the artificial potential field method is also proposed, which reduces the search for the optimal obstacle avoidance trajectory from the global scene to a banded area, greatly reducing the required computing power. Experimental results show that, compared with existing approaches, the proposed method effectively solves the obstacle avoidance trajectory planning problem under network communication constraints and balances traffic safety and vehicle stability during obstacle avoidance.
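As a rough illustration of the two mechanisms the abstract names, the Python sketch below shows (i) a composite reward that trades off traffic safety against vehicle stability, in the spirit of the game-theoretic reward, and (ii) a conventional artificial-potential-field preprocessing step that keeps only a low-potential band of the scene for trajectory search. This is a minimal sketch assuming the standard APF formulation; all function and parameter names (obstacle_reward, k_att, d0, band_quantile, and so on) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def obstacle_reward(d_obstacle, lateral_accel,
                    d_min=2.0, a_max=3.0,
                    w_safety=0.6, w_stability=0.4):
    """Hedged stand-in for the game-theoretic reward: weigh safety vs. stability.

    d_obstacle    -- distance (m) to the nearest obstacle
    lateral_accel -- magnitude of lateral acceleration (m/s^2)
    """
    safety = min(0.0, d_obstacle - d_min)               # penalise closing below d_min
    stability = min(0.0, a_max - abs(lateral_accel))    # penalise harsh lateral motion
    return w_safety * safety + w_stability * stability  # weights act as the "game" between objectives

def apf_potential(q, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Standard APF: quadratic attraction to the goal, bounded repulsion from obstacles."""
    u = 0.5 * k_att * np.sum((q - goal) ** 2)
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0.0 < d < d0:                                # repulsion acts only within radius d0
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def banded_candidates(grid_points, goal, obstacles, band_quantile=0.2):
    """Keep only the lowest-potential fraction of the scene as the search band."""
    u = np.array([apf_potential(p, goal, obstacles) for p in grid_points])
    return grid_points[u <= np.quantile(u, band_quantile)]
```

Restricting the Q-learning search to the output of banded_candidates rather than the full grid is one plausible reading of how the banded-area preprocessing cuts the computing-power demand; the paper's actual band construction may differ.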
