Abstract

Multi-agent deep reinforcement learning (MA-DRL) offers a powerful approach to tackling computational problems in power systems, particularly those arising from the distributed energy resources that have been widely adopted to advance energy sustainability. This paper presents a novel optimal energy management scheme based on a proposed MA-DRL method. The method employs deep neural networks that combine stacked denoising auto-encoders with the learning capability of the multi-agent deep deterministic policy gradient (MADDPG) algorithm. The MA-DRL method is used to find the optimal strategy for the energy management problem, formulated within the Markov decision process framework. It aims to coordinate multiple energy carriers and achieve optimal operation across hourly dispatches while accounting for the distinct characteristics of electric and thermal energy. The primary challenge in the planning and operation of multiple energy carrier microgrids (MECMs) is determining the optimal interaction among renewable energy resources, energy storage systems, power-to-thermal conversion systems, and the upstream power grid so as to improve overall energy utilization efficiency. The presented robust method adaptively derives the optimal operation of MECMs through centralized learning and decentralized implementation. The optimization problem is formulated to concurrently reduce total emissions and operating costs while respecting engineering design constraints. Finally, the efficiency of the proposed method is verified on an integrated system comprising a modified IEEE 33-bus network and an 8-node gas system.
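
For readers unfamiliar with the centralized-learning, decentralized-implementation pattern mentioned in the abstract, the sketch below illustrates how a MADDPG-style setup typically separates per-agent actors (used at dispatch time with local observations only) from a centralized critic (used only during training, with access to all agents' observations and actions). This is a minimal illustration under assumed settings: the agent count, observation and action sizes, network widths, placeholder data, and the use of PyTorch are all assumptions, not details taken from the paper.

```python
# Minimal MADDPG-style sketch: decentralized actors, centralized critic.
# All dimensions and the placeholder data below are illustrative assumptions.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 2   # e.g. storage, power-to-thermal, grid-exchange agents (assumed)

class Actor(nn.Module):
    """Decentralized policy: maps one agent's local observation to its action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Centralized critic: scores the joint observation-action pair during training."""
    def __init__(self):
        super().__init__()
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, all_obs, all_acts):
        # Flatten per-agent observations and actions into one joint vector per sample.
        return self.net(torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralizedCritic()

# Centralized learning: the critic sees every agent's observation and action.
batch_obs = torch.randn(32, N_AGENTS, OBS_DIM)                                   # placeholder batch
batch_acts = torch.stack([a(batch_obs[:, i]) for i, a in enumerate(actors)], dim=1)
q_values = critic(batch_obs, batch_acts)   # would feed TD and policy-gradient losses

# Decentralized implementation: at dispatch time each agent acts on local data only.
local_obs = torch.randn(OBS_DIM)
dispatch_action = actors[0](local_obs)
```

The design choice this sketch highlights is that only the critic requires global information, and only while training; once training is complete, each agent's actor can be deployed on its own local measurements, which is what makes decentralized hourly dispatch possible.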
