Abstract

Ship energy scheduling based on deep reinforcement learning (DRL) is currently an important research direction, and the Deep Q-Network (DQN) algorithm has been applied successfully in this field. However, DQN has two shortcomings in all-electric ship (AES) energy scheduling: the insufficient performance of a multilayer perceptron (MLP) as the action network, and degraded convergence caused by over-estimation of the Q value. To address them, this paper proposes a DQN cross-entropy (DQN-CE) energy scheduling algorithm that combines a bi-directional LSTM with an attention mechanism (BiLSTM-Att). First, the MLP is replaced with BiLSTM-Att as the action network of DQN; then an improved DRL algorithm, DQN-CE, is proposed by adding to DQN a cross-entropy loss between the next-step actions predicted by the target network and the action network. Simulation experiments on an all-electric ferry show that, compared with the original DQN energy scheduling algorithm, the proposed algorithm reduces economic consumption by 4.11%, increases the utilization rate of the energy storage system by 24.4%, and reduces the exploration time of agent training by 31.3%. The effectiveness and superiority of the algorithm were further verified in a new case study.
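The core of the DQN-CE idea described above is augmenting the standard temporal-difference loss with a cross-entropy term that aligns the action network's next-step action prediction with the target network's. The following is a minimal NumPy sketch of one plausible form of such a combined loss; the function name, the one-hot labelling of the target network's greedy action, and the weighting coefficient `ce_weight` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the action dimension
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dqn_ce_loss(q_sa, q_next_online, q_next_target, rewards,
                gamma=0.99, ce_weight=0.1):
    """Sketch of a DQN loss with an added cross-entropy term (assumed form).

    q_sa:          (B,)   Q-values of the taken actions, online (action) network
    q_next_online: (B, A) online-network Q-values at the next states
    q_next_target: (B, A) target-network Q-values at the next states
    rewards:       (B,)   immediate rewards
    """
    # Standard DQN TD loss: target = r + gamma * max_a' Q_target(s', a')
    td_target = rewards + gamma * q_next_target.max(axis=1)
    td_loss = np.mean((q_sa - td_target) ** 2)

    # Cross-entropy between the online network's next-action distribution
    # and the target network's greedy next action (treated as a one-hot label)
    probs = softmax(q_next_online, axis=1)
    labels = q_next_target.argmax(axis=1)
    ce_loss = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

    return td_loss + ce_weight * ce_loss
```

In this reading, the cross-entropy term penalises disagreement between the two networks' predicted next actions, which is one way such a term could regularise the over-estimation behaviour the abstract mentions.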
