Fast charge/discharge scheduling of battery storage systems is essential in microgrids to balance variable renewable energy sources, meet fluctuating demand, and maintain grid stability. To achieve this, parallel processing is employed, allowing batteries to respond rapidly to dynamic conditions. By handling the complexity, high data volume, and rapid decision-making requirements in real time, parallel processing helps ensure that the microgrid operates with stability, efficiency, and safety. The application of deep reinforcement learning (DRL) to scheduling algorithm design has further increased the demand for computational power. To address this challenge, we propose a Ray-based parallel framework to accelerate the development of fast charge/discharge scheduling for battery storage systems in microgrids, and we demonstrate how to implement a real-world scheduling problem in the framework. We focus on minimizing power losses and reducing the ramping rate of net loads by leveraging the Asynchronous Advantage Actor-Critic (A3C) algorithm and the features of a Ray cluster for real-time decision making. Multiple instances of OpenDSS were executed concurrently, with each instance simulating a distinct environment and processing input data efficiently. Additionally, Numba CUDA was used to provide GPU acceleration with shared memory, significantly improving the performance of the computationally intensive reward function in A3C. The proposed framework improved scheduling performance, enabling efficient energy management in complex, dynamic microgrid environments.
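As a rough illustration of the parallel pattern described above (a minimal sketch, not the authors' implementation), the code below uses a Ray parameter-server actor to hold shared policy weights while several rollout workers compute updates concurrently and push them back asynchronously, A3C-style. The class names ParameterServer and RolloutWorker, the toy reward, and the toy gradient are illustrative assumptions; in a real deployment each worker would own its own OpenDSS circuit (e.g. via opendssdirect) and back-propagate through an actor-critic network inside compute_update.

import numpy as np
import ray

ray.init(ignore_reinit_error=True)


@ray.remote
class ParameterServer:
    """Holds the shared policy parameters updated asynchronously by workers."""

    def __init__(self, dim):
        self.weights = np.zeros(dim)

    def apply_update(self, grad, lr=0.01):
        self.weights -= lr * grad  # asynchronous, A3C-style update
        return self.weights

    def get_weights(self):
        return self.weights


@ray.remote
class RolloutWorker:
    """One worker per environment copy; a dedicated OpenDSS instance would
    be created here in place of the toy model."""

    def __init__(self, seed, dim):
        self.rng = np.random.default_rng(seed)
        self.dim = dim

    def compute_update(self, weights):
        # Placeholder for: run a scheduling episode in OpenDSS, evaluate the
        # power-loss / ramping reward (possibly on GPU), and back-propagate.
        state = self.rng.normal(size=self.dim)
        reward = -np.sum((weights - state) ** 2)  # toy reward
        grad = 2.0 * (weights - state)            # toy policy gradient
        return grad, reward


dim, n_workers, n_rounds = 8, 4, 20
ps = ParameterServer.remote(dim)
workers = [RolloutWorker.remote(seed=i, dim=dim) for i in range(n_workers)]

for _ in range(n_rounds):
    w = ray.get(ps.get_weights.remote())
    # Launch all workers concurrently; apply each update as soon as it arrives.
    pending = [wk.compute_update.remote(w) for wk in workers]
    while pending:
        done, pending = ray.wait(pending, num_returns=1)
        grad, reward = ray.get(done[0])
        ps.apply_update.remote(grad)

print("final weights:", ray.get(ps.get_weights.remote()))
ray.shutdown()

Similarly, the following Numba CUDA kernel is a hypothetical sketch (requiring a CUDA-capable GPU) of how a shared-memory block reduction could accelerate one reward term, here a squared ramping penalty on the net-load profile; the paper's actual reward function is not reproduced.

import numpy as np
from numba import cuda, float64

TPB = 128  # threads per block


@cuda.jit
def ramping_penalty_kernel(net_load, out):
    # Per-block partial sums accumulated in shared memory.
    sm = cuda.shared.array(shape=TPB, dtype=float64)
    i = cuda.grid(1)
    tid = cuda.threadIdx.x

    val = 0.0
    if i > 0 and i < net_load.shape[0]:
        diff = net_load[i] - net_load[i - 1]  # ramp between time steps
        val = diff * diff
    sm[tid] = val
    cuda.syncthreads()

    # Tree reduction within the block using shared memory.
    stride = TPB // 2
    while stride > 0:
        if tid < stride:
            sm[tid] += sm[tid + stride]
        cuda.syncthreads()
        stride //= 2

    if tid == 0:
        cuda.atomic.add(out, 0, sm[0])


net_load = np.random.rand(10_000)
d_load = cuda.to_device(net_load)
d_out = cuda.to_device(np.zeros(1))
blocks = (net_load.size + TPB - 1) // TPB
ramping_penalty_kernel[blocks, TPB](d_load, d_out)
print("ramping penalty:", d_out.copy_to_host()[0])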