Abstract

This study combines a reinforcement learning agent with a myopic optimization model to improve real-time energy decisions in microgrids with renewable sources and energy storage devices. The reinforcement learning agent is an actor-critic agent that selects near-optimal aggregate charging/discharging energy decisions for the microgrid storage devices from a discrete action space, guided by a reward derived from the microgrid's online optimal objective function value. The resulting next-step energy levels of the storage devices are then passed as parameters to the myopic optimization-based decision-making model, which determines the power flows within the microgrid that minimize the real-time energy cost. Real-time measurements of the microgrid's stochastic parameters, together with the current energy levels of the electrical and heat storage, form the agent's observation state. The actor and critic approximators are deep neural networks optimized with the Adam gradient descent algorithm subject to a gradient threshold. Although training the proposed model with a 2-kWh charging/discharging increment is time-consuming, it makes optimal microgrid energy decisions in 100% of cases and improves online energy decisions by 90.98% compared with the myopic model alone.
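The following is a minimal sketch of the kind of actor-critic setup the abstract describes (a discrete 2-kWh charge/discharge action space, deep neural-network approximators, Adam with a gradient threshold). The layer sizes, observation layout, action range, and all names are illustrative assumptions, not the authors' implementation; in the paper, the reward would come from the myopic optimization's objective value.

```python
# Hedged sketch (not the authors' code): actor-critic agent with discrete
# charge/discharge actions in 2-kWh increments, deep-NN approximators,
# and Adam with gradient-norm clipping ("gradient threshold").
import torch
import torch.nn as nn

N_OBS = 6                      # assumed observation size: renewables, loads, prices, storage levels
ACTIONS = [-4, -2, 0, 2, 4]    # assumed charge/discharge energies in kWh, 2-kWh increments

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_OBS, 64), nn.ReLU(),
            nn.Linear(64, len(ACTIONS)))
    def forward(self, obs):
        # Policy over the discrete charging/discharging actions
        return torch.distributions.Categorical(logits=self.net(obs))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_OBS, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, obs):
        return self.net(obs).squeeze(-1)

actor, critic = Actor(), Critic()
params = list(actor.parameters()) + list(critic.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
GRAD_THRESHOLD = 1.0           # assumed gradient-clipping threshold

def update(obs, action_idx, reward, next_obs, done, gamma=0.99):
    """One actor-critic update step; `reward` is assumed to be derived
    from the real-time optimization's objective function value."""
    value = critic(obs)
    with torch.no_grad():
        target = reward + gamma * critic(next_obs) * (1.0 - done)
    advantage = target - value
    dist = actor(obs)
    actor_loss = -(dist.log_prob(action_idx) * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    torch.nn.utils.clip_grad_norm_(params, GRAD_THRESHOLD)
    opt.step()
```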
