Abstract

This paper addresses the voltage control problem in medium-voltage distribution networks. The objective is to maintain the voltage profile within a safe range in a cost-efficient way, in the presence of uncertainties in both the future working conditions and the physical parameters of the system. Indeed, the voltage profile depends not only on the fluctuating renewable-based power generation and load demand, but also on the physical parameters of the system components. In reality, the characteristics of loads, lines and transformers are subject to complex and dynamic dependencies, which are difficult to model. In such a context, the quality of the control strategy depends on the accuracy of the power flow representation, which requires capturing the non-linear behavior of the power network. Relying on detailed analytical models (which are themselves subject to uncertainties) entails a high computational burden that is incompatible with the real-time constraints of the voltage control task. To address this issue, while avoiding arbitrary modeling approximations, we leverage a deep reinforcement learning model to ensure autonomous grid operational control. Outcomes show that the proposed model-free approach offers a promising alternative for finding a compromise between computation time, conservativeness and economic performance.

Highlights

  • The massive integration of Distributed Generation (DG) units in electric distribution networks poses significant challenges for system operators [1,2,3,4,5]

  • The main contribution of this paper is to propose a self-learning voltage control tool based on deep reinforcement learning (DRL), which accounts for the limited knowledge on both the network parameters and the future working conditions

  • To solve the voltage control problem, the deep deterministic policy gradient (DDPG) algorithm is implemented in Python (an illustrative sketch is given below)
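
The highlights mention a Python implementation of DDPG without reproducing it. As a purely illustrative sketch (not the authors' code), the snippet below shows the core ingredients of a DDPG agent: actor and critic networks, target networks, a replay buffer, and soft target updates. The network sizes, learning rates, exploration noise and the PyTorch dependency are assumptions made for illustration only.

```python
# Minimal DDPG sketch (illustrative assumptions, not the paper's implementation).
import copy
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Maps a state (network operating point) to a continuous control action."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)


class Critic(nn.Module):
    """Estimates the action-value Q(s, a)."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


class DDPGAgent:
    def __init__(self, state_dim, action_dim, gamma=0.99, tau=0.005):
        self.actor = Actor(state_dim, action_dim)
        self.critic = Critic(state_dim, action_dim)
        self.actor_target = copy.deepcopy(self.actor)
        self.critic_target = copy.deepcopy(self.critic)
        self.actor_opt = torch.optim.Adam(self.actor.parameters(), lr=1e-4)
        self.critic_opt = torch.optim.Adam(self.critic.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=100_000)  # replay memory of (s, a, r, s', done)
        self.gamma, self.tau = gamma, tau

    def act(self, state, noise_std=0.1):
        # Deterministic policy plus Gaussian exploration noise.
        with torch.no_grad():
            a = self.actor(torch.as_tensor(state, dtype=torch.float32))
        return (a + noise_std * torch.randn_like(a)).clamp(-1.0, 1.0).numpy()

    def update(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = (torch.as_tensor(np.array(x), dtype=torch.float32)
                          for x in zip(*batch))
        # Critic: regress Q(s, a) towards the bootstrapped target value.
        with torch.no_grad():
            target = r.unsqueeze(-1) + self.gamma * (1 - d.unsqueeze(-1)) * \
                self.critic_target(s2, self.actor_target(s2))
        critic_loss = nn.functional.mse_loss(self.critic(s, a), target)
        self.critic_opt.zero_grad(); critic_loss.backward(); self.critic_opt.step()
        # Actor: maximise the critic's estimate of Q(s, pi(s)).
        actor_loss = -self.critic(s, self.actor(s)).mean()
        self.actor_opt.zero_grad(); actor_loss.backward(); self.actor_opt.step()
        # Soft (Polyak) update of the target networks.
        for tgt, src in ((self.actor_target, self.actor), (self.critic_target, self.critic)):
            for tp, p in zip(tgt.parameters(), src.parameters()):
                tp.data.mul_(1 - self.tau).add_(self.tau * p.data)
```

In use, each interaction with the environment would append a transition (s, a, r, s', done) to agent.buffer and then call agent.update().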



Introduction

The massive integration of Distributed Generation (DG) units in electric distribution networks poses significant challenges for system operators [1,2,3,4,5]. Multi-agent frameworks have been developed in [32,33] to enable decentralized execution of the control procedure without requiring a central controller. All these methods disregard the endogenous uncertainties on the network parameters, which may mislead the DSO into believing that the control strategy satisfies the technical constraints, while it may in fact result in unsafe conditions. In this context, the main contribution of this paper is to propose a self-learning voltage control tool based on deep reinforcement learning (DRL), which accounts for the limited knowledge of both the network parameters and the future (very-short-term) working conditions.
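
The following sections formalize this problem as a Markov decision process with a state space, an action space and a reward function, and solve it with DDPG. As a hypothetical illustration only, the sketch below shows how a voltage-control task can be cast in that form; the bus count, the linear voltage-sensitivity surrogate, the penalty weights and the curtailment-based action are assumptions for illustration, not the paper's simulation environment.

```python
# Toy MDP formulation of voltage control (illustrative assumptions throughout).
import numpy as np


class VoltageControlEnv:
    """Toy single-step MDP: curtail DG active power to keep bus voltages in [0.95, 1.05] p.u."""

    def __init__(self, n_buses=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_buses = n_buses
        # Linear voltage-sensitivity surrogate standing in for a full AC power flow.
        self.sensitivity = 0.02 * self.rng.random((n_buses, n_buses))
        self.reset()

    def reset(self):
        # Exogenous uncertainty: fluctuating renewable generation and load demand.
        self.generation = self.rng.uniform(0.0, 1.0, self.n_buses)  # MW
        self.load = self.rng.uniform(0.0, 0.8, self.n_buses)        # MW
        return self._state()

    def _state(self):
        # State: the observed network operating point.
        return np.concatenate([self.generation, self.load]).astype(np.float32)

    def step(self, action):
        # Action: fraction of DG output curtailed at each bus, in [0, 1].
        curtailment = np.clip(action, 0.0, 1.0)
        injection = (1.0 - curtailment) * self.generation - self.load
        voltages = 1.0 + self.sensitivity @ injection  # p.u., surrogate power flow
        # Reward: penalize voltage-limit violations and the cost of curtailed energy.
        violation = np.maximum(voltages - 1.05, 0.0) + np.maximum(0.95 - voltages, 0.0)
        reward = -100.0 * violation.sum() - (curtailment * self.generation).sum()
        return self._state(), float(reward), True, {}  # one control period per episode
```

An agent such as the DDPG sketch above could then be trained by repeatedly calling env.reset() and env.step(action) and updating from the collected transitions.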

Markov Decision Process
State Space
Action Space
Reward
Reinforcement Learning Algorithm
Simulation Environment
Exogenous Uncertainties on the Network Operating Point
Endogenous Uncertainties on the Network Component Models and Parameters
Case Study
Impact of DDPG Parameters
Impact of Endogenous Uncertainties
Extreme Cases
Findings
Conclusions and Perspectives