Abstract

The proliferation of distributed renewable energy resources poses several challenges to the operation of microgrids due to uncertainty. Traditional energy scheduling approaches often depend on accurate forecasts of these uncertainties, which in many cases adds complexity to the problem. Several data-driven algorithms have been shown to overcome these challenges, but they perform well only when the action space is finite; in most real-world problems the action space is continuous, making such methods unsuitable for practical applications. To address these issues, this paper proposes a purely data-driven, policy-based reinforcement learning (RL) approach, Proximal Policy Optimization (PPO), to determine the optimal schedule of the energy mix in a hybrid renewable energy system (HRES), with a case study on the highly energy-intensive chlor-alkali process. PPO is an advantage actor-critic RL method that trains on sampled experience over multiple epochs of mini-batch updates, allowing it to distribute energy effectively among the HRES components without access to the dynamics of the system. To reduce the grid dependency of a highly energy-intensive process and achieve profitability, a grid-connected HRES consisting of solar photovoltaic panels, wind turbines, and fuel cells is considered to deliver power to the chlor-alkali process. Simulation results demonstrate optimal power dispatch among the available power sources, with overall economic cost savings of around 32.8% and a corresponding carbon emission reduction of around 28.5% when adopting the HRES compared with a grid-only system. Owing to its model-free property, this framework can be applied generally to data-driven scheduling problems.
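
To illustrate the kind of update the abstract describes, the following is a minimal sketch of a PPO clipped-surrogate update with multi-epoch mini-batching, written in Python with PyTorch. The state and action dimensions, network sizes, and hyperparameters are illustrative assumptions and are not taken from the paper; in the actual study the state, action, and reward definitions would come from the HRES/chlor-alkali dispatch model.

    # Hypothetical PPO (advantage actor-critic) update for an HRES dispatch task.
    # Dimensions and hyperparameters below are assumptions for illustration only.
    import torch
    import torch.nn as nn

    STATE_DIM = 6    # e.g. PV output, wind output, fuel-cell state, load, price, time
    ACTION_DIM = 3   # e.g. continuous power set-points for PV, wind, and grid

    class ActorCritic(nn.Module):
        def __init__(self):
            super().__init__()
            self.actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                                       nn.Linear(64, ACTION_DIM))
            self.log_std = nn.Parameter(torch.zeros(ACTION_DIM))  # Gaussian policy
            self.critic = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                                        nn.Linear(64, 1))

        def dist(self, states):
            return torch.distributions.Normal(self.actor(states), self.log_std.exp())

    def ppo_update(model, opt, states, actions, old_logp, returns, advantages,
                   clip_eps=0.2, epochs=10, batch_size=64):
        """Several epochs of mini-batch updates on one batch of collected experience."""
        n = states.size(0)
        for _ in range(epochs):
            for idx in torch.randperm(n).split(batch_size):
                d = model.dist(states[idx])
                logp = d.log_prob(actions[idx]).sum(-1)
                ratio = (logp - old_logp[idx]).exp()
                adv = advantages[idx]
                # Clipped surrogate objective keeps each policy step conservative.
                policy_loss = -torch.min(ratio * adv,
                                         ratio.clamp(1 - clip_eps, 1 + clip_eps) * adv).mean()
                value_loss = (model.critic(states[idx]).squeeze(-1) - returns[idx]).pow(2).mean()
                loss = policy_loss + 0.5 * value_loss - 0.01 * d.entropy().sum(-1).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()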
