Abstract

The transportation systems of countries with heavy traffic flows place great pressure on transportation planning and management, and vehicle path planning is one effective way to alleviate this pressure. Deep reinforcement learning (DRL), a state‐of‐the‐art solution method for vehicle path planning, can balance algorithmic capability against the complexity needed to reflect real‐world conditions. However, DRL suffers from high search cost and premature convergence to local optima, because vehicle path planning problems usually arise in complex environments with diverse action sets. In this paper, a mixed policy gradient actor‐critic (AC) model with a random escape term and a filter operation is proposed, in which the policy weight update is both data driven and model driven. The data‐driven component improves the otherwise poor asymptotic performance, while the model‐driven component ensures the convergence speed of the whole model. To keep the model from converging to a local optimum, a random escape term is added to the policy weight update; it mitigates the difficulty of optimizing a non‐convex loss function by letting the policy explore in more directions. In addition, a filter operation is introduced: the step size of each iteration is selected through a filter optimization algorithm to achieve a better iterative effect. Numerical experiments show that the proposed model improves the solution without loss of accuracy, speeds up convergence, and improves data utilization.
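The abstract does not give the exact update rule, but the idea of a random escape term can be illustrated with a minimal sketch: a gradient-ascent update on a toy non-convex objective, with a decaying random perturbation added to each step. The objective, step sizes, and decay schedule below are our own assumptions for illustration, not the paper's method; the point is that injected noise lets the iterate leave a stationary point where the plain gradient update would stall.

```python
import numpy as np

rng = np.random.default_rng(0)

def ascent_with_escape(grad_fn, theta, steps=200, lr=0.05, escape_scale=0.1):
    """Gradient ascent plus a decaying random 'escape' term.

    Hypothetical illustration of the escape-term idea: the noise term
    escape_scale / (1 + t) * N(0, I) shrinks over time, so early iterations
    can jump away from flat or locally optimal regions while late iterations
    converge smoothly. This is NOT the paper's exact policy-weight update.
    """
    for t in range(steps):
        escape = escape_scale / (1.0 + t) * rng.standard_normal(theta.shape)
        theta = theta + lr * grad_fn(theta) + escape
    return theta

# Toy non-convex objective f(x) = -(x^2 - 1)^2, with maxima at x = +/-1
# and a stationary point at x = 0 where the plain gradient update stalls.
f = lambda x: -(x**2 - 1.0) ** 2
grad = lambda x: -4.0 * x * (x**2 - 1.0)

theta = ascent_with_escape(grad, np.array([0.0]))
```

Starting exactly at the stationary point x = 0, plain gradient ascent would never move; the escape term pushes the iterate off the plateau, after which the gradient carries it toward one of the two maxima at x = ±1.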
