Abstract

In recent years, multi-agent deep reinforcement learning (MADRL) has made significant strides in power system decision-making and control. However, high-fidelity, real-time platforms for testing DRL control algorithms on detailed power system models remain scarce. Motivated by the characteristics of EtherCAT communication and DRL, this study presents a MADRL online testing platform for distributed real-time dynamic control of power systems. The platform employs the Opal-RT real-time simulator to simulate dynamic power system environments in real time and uses multiple AI workstations to run the MADRL control algorithms. The proposed platform enables real-time interaction between the AI workstations and the Opal-RT real-time simulator by leveraging the EtherCAT communication protocol to transmit system information and control signals, thereby supporting online, real-time training of distributed MADRL algorithms for power system dynamic control. The effectiveness and advantages of the proposed platform are validated through detailed case studies in which distributed MADRL algorithms are tested on classical power system control problems.
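The cyclic exchange the abstract describes (the simulator publishes system state each cycle, distributed agents return control signals) can be sketched as follows. This is a hypothetical, simplified illustration under stated assumptions, not the paper's implementation: the toy simulator stands in for the Opal-RT environment, the proportional agents stand in for the MADRL agents on the AI workstations, and the Python loop stands in for the EtherCAT-style cyclic data transfer. All class names, dynamics, and gains are invented for illustration.

```python
import random

class ToyGridSimulator:
    """Stand-in for the real-time power system environment (Opal-RT's role)."""
    def __init__(self, n_buses=3):
        # Per-bus frequency deviation (Hz); the "system information" sent out.
        self.freq_dev = [0.0] * n_buses

    def step(self, controls):
        # Apply each agent's control signal and inject a small disturbance.
        for i, u in enumerate(controls):
            self.freq_dev[i] += -0.5 * u + random.uniform(-0.01, 0.01)
        return list(self.freq_dev)

class ProportionalAgent:
    """Placeholder for one learned agent on an AI workstation."""
    def __init__(self, gain=0.8):
        self.gain = gain

    def act(self, local_obs):
        # Control signal sent back to the simulator over the comms link.
        return self.gain * local_obs

def run_cycles(n_cycles=100, n_buses=3):
    """One episode of the cyclic simulator-agent exchange."""
    sim = ToyGridSimulator(n_buses)
    agents = [ProportionalAgent() for _ in range(n_buses)]
    state = sim.step([0.0] * n_buses)  # initial cycle with no control
    for _ in range(n_cycles):
        controls = [a.act(s) for a, s in zip(agents, state)]
        state = sim.step(controls)
    return state
```

In the actual platform, the Python loop above would be replaced by deterministic EtherCAT cycles, and each `ProportionalAgent` by a trained DRL policy; the sketch only captures the information flow (state out, control in, every cycle) that the platform provides in real time.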