Abstract

This paper presents an approximate Reinforcement Learning (RL) methodology for bi-level power management of networked Microgrids (MGs) in electric distribution systems. In practice, the cooperative agent may have limited or no knowledge of MG asset behavior and of the detailed models behind the Point of Common Coupling (PCC). This renders the distribution system unobservable and impedes conventional optimization solutions to the constrained MG power management problem. To tackle this challenge, we propose a bi-level RL framework in a price-based environment. At the higher level, a cooperative agent performs function approximation to predict the behavior of entities under incomplete information about MG parametric models, while at the lower level, each MG provides a power-flow-constrained optimal response to price signals. The function approximation scheme is then used within an adaptive RL framework to optimize the price signal as the system load and solar generation change over time. Numerical experiments verify that, compared to previous works in the literature, the proposed privacy-preserving learning model has better adaptability and enhanced computational speed.
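The price-based bi-level interaction described above can be illustrated with a minimal toy sketch. All names, cost functions, and the price-update rule here are illustrative assumptions, not the paper's actual method: each microgrid privately solves a simple quadratic-cost response to a price signal, while an upper-level cooperative agent, observing only the aggregate response at the PCC, adapts the price toward a target power exchange.

```python
import numpy as np

# Hypothetical MG parameters, PRIVATE to each microgrid (the upper-level
# agent never sees them): local profit for MG i is price*x - a_i*x^2,
# with a box constraint |x_i| <= limit_i standing in for PCC/power-flow limits.
mg_cost_coeffs = np.array([0.8, 1.2, 1.5])    # a_i for each MG (assumed values)
mg_power_limits = np.array([4.0, 3.0, 2.5])   # per-MG exchange limits (assumed)

def mg_response(price):
    """Lower level: each MG maximizes price*x - a*x^2 subject to its limit."""
    unconstrained = price / (2.0 * mg_cost_coeffs)  # argmax of the concave profit
    return np.clip(unconstrained, -mg_power_limits, mg_power_limits)

target_exchange = 5.0   # power the cooperative agent wants from the MGs (assumed)
price = 1.0             # initial price signal
lr = 0.1                # step size for the price update

for _ in range(200):
    total = mg_response(price).sum()         # observe aggregate behavior only
    price += lr * (target_exchange - total)  # adapt the price toward the target

print(round(mg_response(price).sum(), 2))  # → 5.0
```

This captures the privacy-preserving aspect in miniature: the upper level learns from observed responses alone, never from the MGs' internal cost models or constraints.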
