Abstract

The deep reinforcement learning (DRL) technique has attracted attention for its potential in designing "virtual network" controllers. It offers a solution that avoids the specific parameters and explicit system model required by classical dynamic programming algorithms. However, handling system uncertainties and the resulting performance deterioration remains a challenge. To address this, the authors propose a new control scheme based on a twin delayed deep deterministic policy gradient (TD3) adaptive controller, which replaces the conventional virtual synchronous generator (VSG) module in modular multilevel converter (MMC) control. In this approach, an adaptive programming module is developed from a critic fuzzy-network perspective to determine the optimal control policy. The proposed framework improves system stability and resists disturbances while retaining the merits of the conventional VSG control model. The approach is implemented and tested using the DRL toolbox in MATLAB/Simulink.
