Abstract

Ancillary services rely on operating reserves to maintain an uninterrupted electricity supply that meets demand. One of the hidden reserves of the grid lies in thermostatically controlled loads (TCLs). To exploit these reserves efficiently, a new control scheme is proposed in which the supply voltage is varied within its allowable range so that aggregate consumption follows a set power reference. The approach is based on deep reinforcement learning (RL): a double deep Q-network (DDQN) is used because of its state-of-the-art performance in complex control tasks, its native handling of continuous environment state variables, and the possibility of applying the trained network to the real grid in a model-free manner. To evaluate the deep RL controller, the proposed method was compared with classic proportional control of the voltage change according to the power reference. The solution was validated in setups with different numbers of TCLs in a feeder to demonstrate its generalization capabilities. This article discusses the particularities of applying deep reinforcement learning in the power system domain, together with the results achieved by the RL-powered demand response solution. The hyperparameters of the DDQN algorithm were tuned to achieve the best performance; in particular, the influence of the learning rate, the target network update step, the hidden layer size, the batch size, and the replay buffer size was assessed. The achieved performance is roughly two times better than the competing approach of optimal control selection within the considered simulation time interval. A decrease in the deviation of actual power consumption from the reference power profile is demonstrated, and the cost benefit of the presented voltage control-based ancillary service is estimated to show its potential impact.
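
As an illustration of the training step behind this approach, the following is a minimal sketch of the double DQN target computation, assuming a PyTorch implementation; the state and action dimensions, network shape, and all hyperparameter values are illustrative placeholders, not the authors' code or the tuned values reported in the article.

```python
# A minimal sketch (not the authors' implementation) of one DDQN
# training step. All sizes and constants below are assumptions.
import torch
import torch.nn as nn

STATE_DIM = 4    # assumed: features describing the TCL feeder state
N_ACTIONS = 5    # assumed: discretized voltage set-points in the allowable range
GAMMA = 0.99     # discount factor (illustrative)
HIDDEN = 64      # hidden layer size -- one of the tuned hyperparameters

def make_qnet() -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
        nn.Linear(HIDDEN, N_ACTIONS),
    )

online_net = make_qnet()
target_net = make_qnet()
# The target network is re-synced every "target update step" steps,
# another of the hyperparameters the article reports tuning.
target_net.load_state_dict(online_net.state_dict())

def ddqn_loss(s, a, r, s_next, done):
    """DDQN target: the online net *selects* the next action, the target
    net *evaluates* it, reducing the overestimation bias of vanilla DQN."""
    q_sa = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_next = online_net(s_next).argmax(dim=1, keepdim=True)   # selection
        q_next = target_net(s_next).gather(1, a_next).squeeze(1)  # evaluation
        target = r + GAMMA * (1.0 - done) * q_next
    return nn.functional.mse_loss(q_sa, target)
```

In a full training loop, the minibatches `(s, a, r, s_next, done)` would be sampled from a replay buffer whose capacity, together with the batch size and learning rate, completes the set of hyperparameters assessed in the article.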

Highlights

  • In power system analysis and power grid development, classic analytical approaches have been applied successfully in numerous control solutions

  • New advanced control solutions are required to cope with the uncertainty that loads and renewable energy sources contribute to the modern power grid [1]

  • These results motivate further research on applying reinforcement learning (RL) to optimize the control of thermostatically controlled loads (TCLs), accounting for the stochastic behavior of the loads and using control means such as voltage changes within the allowable range

Summary

Introduction

In power system analysis and power grid development, classic analytical approaches have been applied successfully in numerous control solutions. It has been shown that power consumption can be modulated according to a reference power profile when control over a heterogeneous set of TCLs is considered. These results allow the research to continue towards the application of RL to optimize the control of TCLs, accounting for the stochastic behavior of the loads and using control means such as voltage changes within the allowable range. The authors' previous paper focused on applying a classic reinforcement learning algorithm, Q-learning, to find an optimal control for the voltage change-based ancillary service. Even though Q-learning showed good performance, in this paper the authors expand their research towards deep RL, which has proven superior to other control approaches in seminal applications, including demand response. A deep RL algorithm is therefore utilized in the presented research.
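
For context, the tabular Q-learning update referenced above takes the standard textbook form (reproduced here for orientation, not quoted from the paper):

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]$$

where $\alpha$ is the learning rate and $\gamma$ the discount factor. The DDQN replaces the table $Q(s, a)$ with a neural network and the $\max$ operator with the decoupled action selection and evaluation described in the Double Deep Q-Network section.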

Contribution
Problem Formulation
Paper Organization
Materials and Methods
The Proposed Voltage Control-Based Service Using Deep Reinforcement Learning
Bellman Equations for Q-Learning Algorithm
Double Deep Q-Network
Results
Experiment Pipeline
Results and Discussion
Method
Evaluation of the Expected Decrease in Costs
Conclusions

