Abstract

The development of hydrogen-based vehicles (HVs) can help achieve a zero-carbon future; however, the limited availability of hydrogen refueling stations (HRSs) prevents their widespread adoption as personal vehicles. Existing studies have investigated the energy management of HRSs using various methods, but none has applied a deep reinforcement learning (DRL) approach to handle the associated uncertainties and achieve real-time decision making. This study proposes an energy management optimization model of an on-grid HRS based on an improved dueling double deep Q network (D3QN) algorithm with NoisyNet. The primary goal is to reduce the operating cost of an HRS and improve voltage stability while satisfying the hydrogen demand of HVs. Notably, this study adopts an improved version of the double deep Q network (DDQN), namely the NoisyNet-D3QN (NN-D3QN) approach, because NoisyNet aids efficient exploration and the dueling network generalizes learning across actions. The adopted NN-D3QN algorithm outperforms the baseline algorithms: compared with the NN-DDQN, D3QN, and DDQN approaches, the reward of the proposed method increases by 19.08%, 31.66%, and 39.26%, respectively.
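The two architectural ingredients named above can be sketched compactly. The snippet below is a minimal NumPy illustration, not the paper's actual network: the layer sizes, parameter names, and single-layer heads are assumptions for clarity. It shows (1) a NoisyNet layer, where learned noise scales on the parameters replace epsilon-greedy exploration; (2) the dueling decomposition Q(s,a) = V(s) + A(s,a) - mean_a A(s,a); and (3) the double-DQN target, where the online network selects the next action and the target network evaluates it.

```python
import numpy as np

def noisy_linear(x, mu_w, sigma_w, mu_b, sigma_b, rng):
    """NoisyNet layer: parameters are perturbed by learned noise scales,
    so exploration comes from parameter noise rather than epsilon-greedy."""
    w = mu_w + sigma_w * rng.standard_normal(mu_w.shape)
    b = mu_b + sigma_b * rng.standard_normal(mu_b.shape)
    return x @ w + b

def dueling_q(features, value_head, advantage_head):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    Subtracting the mean advantage makes V and A identifiable."""
    v = features @ value_head          # (batch, 1) state value
    a = features @ advantage_head      # (batch, n_actions) advantages
    return v + (a - a.mean(axis=-1, keepdims=True))

def double_dqn_target(r, gamma, q_online_next, q_target_next, done):
    """Double DQN target: the online net picks argmax_a Q_online(s', a),
    the target net supplies Q_target(s', a*), reducing overestimation."""
    a_star = np.argmax(q_online_next, axis=-1)
    q_eval = np.take_along_axis(q_target_next, a_star[..., None], axis=-1).squeeze(-1)
    return r + gamma * q_eval * (1.0 - done)

# Tiny demonstration with hypothetical shapes (4 features, 3 actions).
rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 4))
q_vals = dueling_q(feats, rng.standard_normal((4, 1)), rng.standard_normal((4, 3)))
# With the mean-advantage correction, the per-state mean of Q equals V(s).
v_check = feats @ np.zeros((4, 1))  # placeholder; see assertion in text
```

In the D3QN combination used here, the dueling head sits on top of NoisyNet layers and the resulting Q-values feed the double-DQN target during training.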
