Abstract
In intelligent and connected traffic scenarios, the convenient acquisition of microscopic vehicle states and global traffic states can help solve vehicle driving and energy management problems in complex traffic environments. This paper proposes a new energy management method for a hydrogen fuel cell bus based on the double-layer deep deterministic policy gradient (DDPG). Combined with the SUMO simulation platform, a double-layer deep reinforcement learning (D-DRL) architecture based on DDPG is designed to improve control accuracy and training speed. In the upper layer of the D-DRL, the agent handles the effects of the complex traffic environment, controlling the vehicle at a reasonable speed and keeping it running smoothly to reduce the energy loss caused by speed changes; compared with the SUMO-IDM car-following model, the maximum-minimum velocity difference is reduced by 21%, and the acceleration and the change in acceleration are reduced by 7.9% and 19%, respectively. After the lower-layer agent receives the speed output by the upper layer, it distributes power between the fuel cell and the power battery. Compared with the DP algorithm, it keeps the SOC at a higher level, the hydrogen consumption level reaches 93.25%, and the fluctuation amplitude decreases by 42.09%, effectively improving fuel cell durability.
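To make the double-layer structure concrete, the sketch below shows one way the two DDPG actors described in the abstract could be wired together: an upper-layer policy maps a traffic state to a speed/acceleration command, and a lower-layer policy maps the resulting power demand and battery SOC to a fuel-cell/battery power split. This is a minimal illustrative sketch, not the paper's implementation; the state definitions, network sizes, action bounds, and the example numbers are all assumptions introduced here.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy network used by DDPG: mu(s) -> a."""
    def __init__(self, state_dim: int, action_dim: int, action_bound: float):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # squash output to [-1, 1]
        )
        self.action_bound = action_bound

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.action_bound * self.net(state)

# Upper layer (assumed state): ego speed, gap to leader, leader speed
# -> acceleration command for smooth car-following.
upper_actor = Actor(state_dim=3, action_dim=1, action_bound=3.0)  # m/s^2

# Lower layer (assumed state): demanded power, battery SOC
# -> fuel-cell power share; the battery supplies the remainder.
lower_actor = Actor(state_dim=2, action_dim=1, action_bound=1.0)

# One hierarchical step (untrained networks, illustrative values only):
traffic_state = torch.tensor([[12.0, 25.0, 11.5]])      # v_ego, gap, v_leader
accel = upper_actor(traffic_state)                      # upper-layer action

demanded_power_kw, soc = 45.0, 0.62
powertrain_state = torch.tensor([[demanded_power_kw, soc]])
fc_share = 0.5 * (lower_actor(powertrain_state) + 1.0)  # map [-1,1] -> [0,1]
p_fc = fc_share * demanded_power_kw                     # fuel-cell power (kW)
p_batt = demanded_power_kw - p_fc                       # battery power (kW)
print(accel.item(), p_fc.item(), p_batt.item())
```

In a full DDPG setup each actor would be trained against its own critic with a replay buffer and target networks; only the hierarchical coupling (upper-layer speed command feeding the lower-layer power split) is shown here.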