Abstract

In intelligent networked traffic scenarios, convenient access to microscopic vehicle states and global traffic states can help solve vehicle driving and energy management problems in complex traffic environments. This paper proposes a new energy management method for a hydrogen fuel cell bus based on a double-layer deep deterministic policy gradient (DDPG). Combined with the SUMO simulation platform, a double-layer deep reinforcement learning (D-DRL) architecture based on DDPG is designed to improve control accuracy and training speed. In the upper layer of the D-DRL, the agent handles the effects of the complex traffic environment, controlling the vehicle at a reasonable speed and keeping it running smoothly to reduce the energy loss caused by speed changes; compared with the SUMO-IDM car-following model, the maximum-minimum velocity difference is reduced by 21%, and the acceleration and the rate of change of acceleration are reduced by 7.9% and 19%, respectively. In the lower layer, after receiving the speed output by the upper layer, the agent distributes power between the fuel cell and the power battery. Compared with the dynamic programming (DP) algorithm, the proposed method keeps the SOC at a higher level, the hydrogen consumption level reaches 93.25% of that of DP, and the fuel cell power fluctuation amplitude decreases by 42.09%, effectively improving fuel cell durability.
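The abstract describes a hierarchical scheme: an upper DDPG agent selects a smooth speed command from traffic observations, and a lower DDPG agent splits the resulting power demand between the fuel cell and the battery. The sketch below illustrates such a two-layer control loop; all network sizes, state definitions, and the `fc_max_power` bound are illustrative assumptions rather than the paper's actual implementation, and the paper additionally couples the upper agent to SUMO (typically via its TraCI interface), which is omitted here.

```python
# Minimal sketch of a double-layer DDPG control step, assuming illustrative
# state/action definitions; not the paper's actual networks or parameters.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy network used by both agents (assumed sizes)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Upper agent: maps traffic observations (assumed here to be gap to leader,
# leader speed, own speed) to a normalized acceleration command.
speed_actor = Actor(state_dim=3, action_dim=1)

# Lower agent: maps (demanded power, battery SOC, vehicle speed) to the
# fuel-cell share of the demand; the battery covers the remainder.
split_actor = Actor(state_dim=3, action_dim=1)

def control_step(traffic_obs, soc, demand_power, fc_max_power=60.0):
    """One hierarchical step: upper layer picks acceleration, lower layer
    splits power. fc_max_power (kW) is an assumed bound, not from the paper."""
    with torch.no_grad():
        accel = speed_actor(torch.tensor(traffic_obs)).item()
        split_state = torch.tensor([demand_power, soc, traffic_obs[2]])
        fc_frac = 0.5 * (split_actor(split_state).item() + 1.0)  # map to [0, 1]
    fc_power = fc_frac * fc_max_power      # fuel-cell share of the demand
    batt_power = demand_power - fc_power   # battery absorbs the rest
    return accel, fc_power, batt_power

# Example call with made-up observations: 20 m gap, leader at 10 m/s, ego at 9 m/s.
print(control_step([20.0, 10.0, 9.0], soc=0.6, demand_power=40.0))
```

In a full training setup, each actor would be paired with a critic and trained off-policy from a replay buffer, with the upper agent rewarded for smooth, safe speed tracking and the lower agent for low hydrogen consumption and limited fuel-cell power fluctuation.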
