Abstract

In this paper, an improved Reinforcement Learning (RL) based approach for adaptive Virtual Synchronous Generator (VSG) control of grid-forming inverters is proposed. High penetration of inverter-based resources causes recurrent frequency stabilization issues. To address this issue, a novel model-free Actor–Critic policy optimization technique is proposed to improve frequency transient metrics such as frequency nadir and Rate of Change of Frequency (RoCoF). Stability analysis of the VSG-controlled system is conducted through small-signal modeling to determine the stability margins of both the virtual inertia parameter J and the damping coefficient Dp. The control problem is formulated in RL terms using a new technique for state, action, and reward construction that improves the agents' learning process. The proposed approach's efficiency is tested for three RL algorithms, namely Deep Deterministic Policy Gradient (DDPG), Soft Actor–Critic (SAC), and Twin-Delayed DDPG (TD3), with special attention given to the latter. Performance metrics such as training time and maximum cumulative episodic reward are used to compare the three algorithms. The resulting agent-based VSG adaptive controllers are simulated and compared in terms of power quality and frequency stabilization capabilities. Finally, a comparison between the non-adaptive VSG and the TD3-based VSG is presented to conclude on the efficiency improvements achieved by the proposed approach.
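The RL formulation described above can be illustrated with a minimal sketch: a toy environment built on the VSG swing equation, where the agent's action adapts the virtual inertia J and damping Dp, the state carries the frequency deviation and RoCoF, and the reward penalizes both transient metrics. All numeric values, reward weights, and class/method names here are illustrative assumptions, not the paper's actual parameters or implementation.

```python
import math

OMEGA0 = 2 * math.pi * 50.0  # nominal angular frequency (50 Hz grid assumed)

class VSGEnv:
    """Toy VSG environment: swing equation J*dw/dt = Pm - Pe - Dp*(w - w0)."""

    def __init__(self, dt=0.01):
        self.dt = dt
        self.reset()

    def reset(self):
        self.omega = OMEGA0
        self.prev_omega = OMEGA0
        self.pm, self.pe = 1.0, 1.0  # per-unit mechanical / electrical power
        return self._state()

    def _state(self):
        # State: frequency deviation and RoCoF, the transient metrics
        # the abstract says the approach targets.
        dev = self.omega - OMEGA0
        rocof = (self.omega - self.prev_omega) / self.dt
        return (dev, rocof)

    def step(self, action, load_step=0.0):
        # Action: the agent's choice of (J, Dp), which small-signal
        # analysis would constrain to the stability margins.
        J, Dp = action
        self.pe += load_step  # optional load disturbance
        self.prev_omega = self.omega
        domega = (self.pm - self.pe - Dp * (self.omega - OMEGA0)) / J
        self.omega += domega * self.dt
        dev, rocof = self._state()
        # Reward penalizes frequency deviation (nadir) and RoCoF;
        # the weighting here is an assumption.
        reward = -(abs(dev) + 0.1 * abs(rocof))
        return self._state(), reward
```

In this sketch a larger commanded J yields a smaller |RoCoF| after a load step, which is the intuition behind letting an Actor–Critic agent (e.g. TD3) raise inertia during transients and relax it otherwise.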
