Abstract

The growing share of volatile renewable energy sources and the emergence of new electrical loads pose multiple challenges to the secure and economical operation of future electrical distribution grids. These challenges can be met either by conventional grid reinforcement, such as the installation of more robust grid equipment, or by intelligent grid control procedures. Flexible devices, such as energy storage systems or couplings between the electricity and gas sectors, can absorb the volatility introduced by the new grid participants. A key element of such a system is an intelligent algorithm that computes optimal schedules for these flexible devices, given the current and predicted state of the electrical distribution grid. Classical approaches use model-based optimization algorithms that rely on fully observable, deterministic models of the system. In this paper, self-learning algorithms based on reinforcement learning (RL) are applied to the task of optimally scheduling flexibilities in distribution grids. These techniques have the advantage of being independent of a fully observable physical model and of working well in stochastic environments. The suitability of RL for this class of problems is examined, and a comparison with classical methods shows the advantages and limitations of these data-driven approaches.
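As one concrete illustration of the model-free approach the abstract describes, the sketch below trains a tabular Q-learning agent to schedule a single battery against a known price signal. The horizon, prices, capacity, and hyperparameters are illustrative assumptions, not values from the paper; the agent learns only from sampled transitions, without an explicit model of the environment.

```python
import random

# Illustrative sketch only: tabular Q-learning scheduling one battery against
# an assumed price signal. Horizon, prices, capacity, and hyperparameters are
# assumptions for illustration, not values from the paper.

HORIZON = 4                    # scheduling steps (e.g. hours)
PRICES = [1.0, 3.0, 1.0, 3.0]  # assumed energy price per step
ACTIONS = [-1, 0, 1]           # discharge, idle, charge (one energy unit)
CAPACITY = 2                   # battery capacity in energy units

def step(t, soc, action):
    """Apply an action; return the next state of charge and the reward
    (negative energy cost: buying costs money, selling earns it)."""
    new_soc = min(max(soc + action, 0), CAPACITY)
    energy_bought = new_soc - soc
    return new_soc, -energy_bought * PRICES[t]

def train(episodes=5000, alpha=0.1, gamma=1.0, eps=0.2, seed=0):
    """Learn a Q-table (t, soc, action) -> value via epsilon-greedy
    Q-learning, using only sampled transitions (no environment model)."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        soc = 0
        for t in range(HORIZON):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)          # explore
            else:                                # exploit current estimate
                a = max(ACTIONS, key=lambda x: q.get((t, soc, x), 0.0))
            new_soc, r = step(t, soc, a)
            best_next = (max(q.get((t + 1, new_soc, x), 0.0) for x in ACTIONS)
                         if t + 1 < HORIZON else 0.0)
            old = q.get((t, soc, a), 0.0)
            q[(t, soc, a)] = old + alpha * (r + gamma * best_next - old)
            soc = new_soc
    return q

def greedy_schedule(q):
    """Roll out the greedy policy; return the action schedule and total reward."""
    soc, schedule, total = 0, [], 0.0
    for t in range(HORIZON):
        a = max(ACTIONS, key=lambda x: q.get((t, soc, x), 0.0))
        soc, r = step(t, soc, a)
        schedule.append(a)
        total += r
    return schedule, total
```

For this toy price signal, the learned schedule charges at the cheap steps and discharges at the expensive ones. In the setting the abstract describes, the state would additionally include the current and predicted grid state, and a function approximator would replace the Q-table.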

