Abstract
With dynamic renewable energy generation and power demand, microgrids (MGs) exchange energy with each other to reduce their dependence on power plants. In this article, we present a reinforcement learning (RL)-based MG energy trading scheme that chooses the energy trading policy according to the predicted future renewable energy generation, the estimated future power demand, and the MG battery level. The scheme applies a deep RL-based energy trading algorithm to address the supply-demand mismatch problem in a smart grid with a large number of MGs, without relying on models of the renewable energy generation and power demand of the other MGs. A performance bound on the MG utility and on the dependence on the power plant is provided. Simulation results for a smart grid with three MGs, using wind speed data from the Hong Kong Observatory and electricity prices from ISO New England, show that this scheme significantly reduces the average energy scheduled from the power plant and thus increases the MG utility compared with a benchmark scheme.
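As a rough illustration of the kind of trading loop the abstract describes, the sketch below trains a tabular Q-learning agent whose state is a discretized (predicted renewable generation, estimated demand, battery level) tuple and whose action is the amount of energy traded with neighboring MGs. The state binning, action set, reward shape, and toy transition model are all assumptions made here for illustration; the paper itself uses a deep RL formulation, and none of these names or constants come from the source.

```python
# Minimal Q-learning sketch of an MG energy trading policy, assuming a
# discretized state (renewable level, demand level, battery level) and a
# small set of trading actions. Illustrative only; not the paper's algorithm.
import random
from collections import defaultdict

ACTIONS = [-2, -1, 0, 1, 2]        # energy sold to (-) / bought from (+) neighbors
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)             # Q[(state, action)] -> estimated return


def choose_action(state):
    """Epsilon-greedy selection over the discrete trading actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def step(state, action):
    """Toy transition balancing renewables, demand, trading, and the battery.

    The reward penalizes energy that must be scheduled from the power plant,
    mirroring the utility vs. plant-dependence trade-off in the abstract.
    """
    renewable, demand, battery = state
    surplus = renewable + battery + action - demand
    plant_energy = max(0, -surplus)            # shortfall covered by the plant
    next_battery = min(4, max(0, surplus))     # clip battery to [0, 4] bins
    reward = -plant_energy - 0.1 * abs(action) # trading carries a small cost
    next_state = (random.randint(0, 4), random.randint(0, 4), next_battery)
    return next_state, reward


state = (2, 2, 2)
for _ in range(10_000):
    action = choose_action(state)
    next_state, reward = step(state, action)
    # One-step Q-learning update toward the bootstrapped target.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```

In the deep RL setting described by the paper, the table `Q` would be replaced by a neural network over continuous generation, demand, and battery values, but the interaction loop and the plant-dependence penalty follow the same pattern.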