Abstract

In this paper, a Reinforcement Learning (RL)-based approach to optimally dispatch PV inverters in unbalanced distribution systems is presented. The proposed approach exploits a decentralized architecture in which PV inverters are operated by agents that perform all computational processes locally while communicating with a central agent to guarantee voltage magnitude regulation within the distribution system. The dispatch problem of PV inverters is modeled as a Markov Decision Process (MDP), enabling the use of RL algorithms. A rolling horizon strategy is used to avoid the computational burden usually associated with continuous state and action spaces, coupled with a computationally efficient learning algorithm to model the action-value function for each PV inverter. The effectiveness of the proposed decentralized RL approach is compared with the optimal solution provided by a centralized nonlinear programming (NLP) formulation. Results show that, across several executions, the proposed approach converges either to the optimal solution or to solutions with a PV curtailment excess of less than 2.5%, while still enforcing voltage magnitude regulation.

Highlights

  • According to the International Energy Agency, for the year 2020, a total addition of 107 GW to the global solar PV capacity was reached [1]

  • The total wall-clock time of the proposed Reinforcement Learning (RL) approach is much higher than the time required to solve the centralized model. This is because, when all the information is available to the centralized Distribution System (DS) agent, the computational time depends on the size of the distribution network, whereas in the proposed decentralized approach the total computational time depends on the number of PV Agents involved

  • To avoid the computational burden usually associated with Markov Decision Processes (MDPs) with continuous state and action spaces, a rolling horizon strategy was used, together with a computationally efficient learning algorithm used to model the action-value function
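The action-value update behind this kind of RL dispatch can be illustrated with a small sketch. The discretized states, curtailment actions, and the learning-rate and discount parameters below are assumptions for illustration only; the paper instead approximates the action-value function over continuous spaces with a rolling horizon strategy.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q[(s, a)]

# Hypothetical PV-agent setup: actions are curtailment fractions of the
# available PV power, and the reward penalizes curtailed energy.
actions = [0.0, 0.25, 0.5, 0.75, 1.0]
Q = defaultdict(float)  # action-value estimates, initialized to zero
q_update(Q, s="overvoltage", a=0.25, r=-0.25, s_next="normal", actions=actions)
```

In the paper's decentralized setting, each PV Agent would maintain its own action-value estimates locally, exchanging only the information needed for voltage regulation with the central DS agent.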


Summary

Introduction

According to the International Energy Agency, for the year 2020, a total addition of 107 GW to the global solar PV capacity was reached [1]. In droop-based control strategies, the PV inverters regulate their active and reactive power injection as a function of the voltage magnitude at their point of connection with the distribution system [7]. Despite their effectiveness in solving overvoltage issues, because curtailment decisions are made based only on local information, a larger amount of active power is curtailed, especially when compared with coordinated strategies that consider the operation of the whole distribution network. Although optimality can be guaranteed through convexification procedures, these centralized approaches show poor scalability. To overcome this issue, works such as [12,14] have developed distributed strategies in which all the information required to perform coordination is shared either with a centralized operator or between closely located PV inverters. The proposed approach exploits a decentralized architecture in which PV inverters are operated by agents that perform all computational processes locally while communicating with a central agent to guarantee voltage magnitude regulation within the distribution system. Regarding the dispatch problem of PV inverters, in [18], a centralized deep RL algorithm is implemented.
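The local, droop-based curtailment described above can be sketched as a piecewise-linear volt-watt curve: the inverter injects full power below a voltage threshold and ramps its output down linearly as the local voltage magnitude rises. The threshold values used here are illustrative assumptions, not parameters from the paper or any grid code.

```python
def volt_watt_droop(v_pu, v_start=1.03, v_max=1.05):
    """Return the allowed fraction of available PV active power as a
    function of the local voltage magnitude (per unit).

    Full output below v_start, linear curtailment between v_start and
    v_max, and zero output above v_max. Thresholds are assumptions.
    """
    if v_pu <= v_start:
        return 1.0
    if v_pu >= v_max:
        return 0.0
    return (v_max - v_pu) / (v_max - v_start)
```

Because such a rule reacts only to the local bus voltage, inverters far from the feeder head may curtail more than necessary, which is the coordination gap the decentralized RL approach aims to close.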

Optimal dispatch of PV inverters
Markov Decision Process and Reinforcement Learning
Reinforcement learning and action-value function approximation
PV inverters dispatch problem as an MDP
State space
Action space
Reward function
Transition model
Action-value function approximation
Overview of the proposed RL approach
Simulation setup
Validation and comparison
Computational time assessment
Full-time horizon operation
Considering PV inverters’ reactive power absorption
Scalability assessment
Conclusion