Abstract

Renewable-based microgrids (MGs) are recognized as an eco-friendly solution in the development of renewable energy (RE). However, MG energy management with high RE penetration faces complicated uncertainties due to inaccurate predictions. Moreover, the growing participation of electric vehicles (EVs) makes traditional model-based methods even less feasible. Considering the uncertainties associated with RE, EVs, and electricity prices, a model-free deep reinforcement learning (DRL) method, namely twin delayed deep deterministic policy gradient (TD3), is employed to develop an optimized control strategy that minimizes operating costs and satisfies charging expectations. The energy management problem is first formulated as a Markov decision process. TD3 then relies solely on limited observations to find the optimal continuous control strategy. The proposed method flexibly adjusts the operating and charging strategies of components according to RE output and electricity price. Its real-time optimization performance over three consecutive days, together with the electricity price, is evaluated, indicating its practical potential for future application. Additionally, comparison results show that the proposed method reduces total costs by up to 15.27% and 4.24% compared to traditional optimization and other DRL methods, respectively, illustrating the superiority of TD3 in optimizing the total costs of the considered MG.
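To make the Markov decision process formulation concrete, the following is a minimal sketch of an MG energy-management environment. All state variables, dynamics, cost coefficients, and the charging-expectation penalty here are illustrative assumptions, not the paper's actual model: the observation is limited to RE output, electricity price, and EV state of charge, the continuous action is EV charging power, and the reward is the negative operating cost.

```python
import random

class MicrogridEnv:
    """Illustrative MG energy-management MDP (assumed dynamics, not the
    paper's model). Observation: (RE output in kW, price in $/kWh, EV
    state of charge). Action: continuous charging power in [0, p_max] kW."""

    def __init__(self, p_max=10.0, ev_capacity=50.0, dt=1.0, seed=0):
        self.p_max = p_max              # max EV charging power (kW), assumed
        self.ev_capacity = ev_capacity  # EV battery capacity (kWh), assumed
        self.dt = dt                    # hours per decision step
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.soc = 0.2  # initial EV state of charge (fraction), assumed
        self.t = 0
        return self._observe()

    def _observe(self):
        # Limited observation: uncertain RE output and electricity price
        # are sampled rather than predicted (illustrative ranges).
        self.re_output = self.rng.uniform(0.0, 8.0)  # kW
        self.price = self.rng.uniform(0.05, 0.30)    # $/kWh
        return (self.re_output, self.price, self.soc)

    def step(self, charge_power):
        charge_power = min(max(charge_power, 0.0), self.p_max)
        # The grid supplies whatever renewables cannot cover.
        grid_import = max(charge_power - self.re_output, 0.0)
        cost = grid_import * self.price * self.dt
        self.soc = min(self.soc + charge_power * self.dt / self.ev_capacity, 1.0)
        self.t += 1
        done = self.t >= 24  # one-day horizon, assumed
        # Penalize an unmet charging expectation (target SOC 0.8, assumed).
        penalty = 5.0 * max(0.8 - self.soc, 0.0) if done else 0.0
        reward = -(cost + penalty)  # minimizing cost = maximizing reward
        return self._observe(), reward, done
```

A TD3 agent would map each observation to a continuous charging action through its deterministic policy network; the clipped double-Q critics and delayed policy updates operate on transitions collected from an environment of this shape.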
