Abstract

Environmental and climate change concerns are driving the rapid development of distributed energy resources (DERs). The Energy Internet (EI), with the power-sharing functionality introduced by energy routers (ERs), offers an appealing paradigm for DER systems. However, previous centralized control schemes for EI systems follow a top-down architecture that is unreliable for future power systems. First, this study proposes a distributed control scheme for a bottom-up EI architecture. Second, because model-based distributed control methods are not flexible enough to handle the complex uncertainties associated with multi-energy demands and DERs, a novel model-free, data-driven multiagent deep reinforcement learning (MADRL) method is proposed to learn the optimal operation strategy for the bottom-layer microgrid (MG) cluster. Unlike existing single-agent deep reinforcement learning methods that rely on homogeneous MG settings, the proposed MADRL adopts decentralized execution, in which agents operate independently to meet local customized energy demands while preserving privacy. Third, an attention mechanism is added to the centralized critic, which effectively accelerates learning. Given the bottom-layer power exchange requests and the predicted electricity price, model predictive control in the upper layer determines the optimal power dispatch between the ERs and the main grid. Comparative simulations against alternative schemes demonstrate the effectiveness of the proposed control scheme.
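To make the attention-augmented centralized critic concrete, the following is a minimal PyTorch sketch in the style of attention-based MADRL critics (e.g., MAAC). The network sizes, the shared encoder, and the single-head attention are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionCritic(nn.Module):
    """Centralized critic that attends over the other agents' encoded
    observation-action pairs when evaluating one agent's Q-value.
    Dimensions and structure are illustrative (MAAC-style sketch)."""

    def __init__(self, n_agents, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.n_agents = n_agents
        # Shared per-agent encoder for (observation, action) pairs.
        self.encoder = nn.Linear(obs_dim + act_dim, hidden)
        # Attention projections: query from the evaluated agent,
        # keys/values from the remaining agents.
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        # Q-head combines the agent's own embedding with the attended context.
        self.q_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))
        self.scale = hidden ** 0.5

    def forward(self, obs, acts, agent_idx):
        # obs: (batch, n_agents, obs_dim); acts: (batch, n_agents, act_dim)
        e = torch.relu(self.encoder(torch.cat([obs, acts], dim=-1)))  # (B, N, H)
        q_i = self.query(e[:, agent_idx])                             # (B, H)
        others = [j for j in range(self.n_agents) if j != agent_idx]
        k = self.key(e[:, others])                                    # (B, N-1, H)
        v = self.value(e[:, others])                                  # (B, N-1, H)
        # Scaled dot-product attention over the other agents.
        attn = torch.softmax(
            (k @ q_i.unsqueeze(-1)).squeeze(-1) / self.scale, dim=-1)  # (B, N-1)
        context = (attn.unsqueeze(-1) * v).sum(dim=1)                  # (B, H)
        return self.q_head(torch.cat([e[:, agent_idx], context], dim=-1))
```

This critic is only used during centralized training; at execution time each agent acts from its local observation alone, which matches the decentralized-execution and privacy-preservation setting described above.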

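The upper-layer dispatch can likewise be sketched as a receding-horizon optimization. The toy formulation below, solved with cvxpy, assumes a single aggregated ER bus with storage, hourly steps, and illustrative limits (p_grid_max, p_bat_max, e_cap); the paper's actual MPC formulation (multiple ERs, network constraints, losses) is likely richer.

```python
import numpy as np
import cvxpy as cp

def dispatch_mpc(price, demand, soc0, horizon=24,
                 p_grid_max=500.0, p_bat_max=100.0, e_cap=400.0):
    """One receding-horizon step: schedule main-grid power and ER storage
    so the forecast electricity cost is minimized while the aggregated
    bottom-layer exchange requests are met. All limits are illustrative."""
    p_grid = cp.Variable(horizon)      # net power drawn from the main grid
    p_bat = cp.Variable(horizon)       # storage power (+ = discharge)
    soc = cp.Variable(horizon + 1)     # stored energy trajectory
    cost = price @ p_grid              # linear cost under the price forecast
    constraints = [
        p_grid + p_bat == demand,      # power balance at the ER bus
        soc[0] == soc0,                # initial state of charge
        soc[1:] == soc[:-1] - p_bat,   # energy bookkeeping (1 h steps)
        soc >= 0, soc <= e_cap,        # storage capacity limits
        cp.abs(p_bat) <= p_bat_max,    # converter power limit
        cp.abs(p_grid) <= p_grid_max,  # main-grid tie-line limit
    ]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    # Receding horizon: apply only the first step, then re-solve next hour.
    return float(p_grid.value[0]), float(p_bat.value[0])

# Hypothetical usage with synthetic forecasts:
rng = np.random.default_rng(0)
price = 20 + 10 * rng.random(24)    # hourly price forecast ($/MWh)
demand = 50 + 30 * rng.random(24)   # aggregated ER exchange requests (MW)
p0, b0 = dispatch_mpc(price, demand, soc0=200.0)
```

Because the storage couples the time steps, the optimizer charges in cheap hours and discharges in expensive ones, which is the intertemporal behavior the upper-layer MPC exploits.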