Abstract

Owing to the negative impedance characteristic of constant power loads (CPLs), electrical distribution networks based on power electronic converters are prone to instability. This brief proposes a robust control technique based on deep reinforcement learning to stabilize a DC microgrid (MG) with parallel boost converters feeding CPLs. For this purpose, a model-free sliding mode controller (MFSMC) is applied to the DC MG. The MFSMC requires no identification of the converters in the DC MG system while ensuring the efficiency and stability of the control synthesis. The key control coefficients are designed by Proximal Policy Optimization (PPO) reinforcement learning (RL). The PPO agent comprises two deep neural networks (NNs), an actor NN and a critic NN, which are trained to adjust the coefficients of the MFSMC. The MFSMC designed by the PPO with the actor-critic architecture is applied to the DC MG system feeding a CPL in an OPAL-RT setup, and its effectiveness is verified through Hardware-in-the-Loop (HiL) simulations.
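To illustrate the actor-critic PPO structure described above, the following minimal sketch (not the authors' implementation) shows how a PPO agent in PyTorch could be trained to adjust MFSMC coefficients from an observed voltage-error state. The state and action dimensions, network sizes, and the placeholder batch are assumptions for illustration; in the brief, the data would come from the converter simulation and a tracking-error-based reward.

```python
# Hypothetical sketch: PPO actor-critic adjusting MFSMC gains (dimensions assumed).
import torch
import torch.nn as nn

STATE_DIM = 3    # assumed state: [voltage error, its derivative, its integral]
ACTION_DIM = 2   # assumed action: adjustments to two MFSMC coefficients

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, ACTION_DIM))
        self.log_std = nn.Parameter(torch.zeros(ACTION_DIM))
    def dist(self, s):
        # Gaussian policy over coefficient adjustments
        return torch.distributions.Normal(self.net(s), self.log_std.exp())

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, 1))
    def forward(self, s):
        return self.net(s).squeeze(-1)

actor, critic = Actor(), Critic()
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)

def ppo_update(states, actions, old_log_probs, returns, clip_eps=0.2):
    """One clipped-surrogate PPO step on a collected batch."""
    values = critic(states)
    advantages = returns - values.detach()
    dist = actor.dist(states)
    log_probs = dist.log_prob(actions).sum(-1)
    ratio = (log_probs - old_log_probs).exp()
    surrogate = torch.min(ratio * advantages,
                          torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages)
    loss = -surrogate.mean() + 0.5 * (returns - values).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Placeholder batch standing in for trajectories collected from the DC MG simulation.
s = torch.randn(32, STATE_DIM)
with torch.no_grad():
    d = actor.dist(s)
    a = d.sample()
    lp = d.log_prob(a).sum(-1)
ret = torch.randn(32)  # placeholder returns; real returns come from the reward signal
ppo_update(s, a, lp, ret)
```

The clipped ratio in the update is what distinguishes PPO from a plain actor-critic method: it limits how far each policy update can move, which is useful when the policy output directly sets controller gains on a live converter model.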
