Abstract

Increasingly complex energy systems are drawing attention to model-free control approaches such as reinforcement learning (RL). This work proposes novel RL-based energy management approaches for scheduling the operation of controllable devices within an electric network. The proposed approaches provide a tool for efficiently solving multi-dimensional, multi-objective and partially observable power system problems. The novelty of this work is threefold. First, we implement a hierarchical RL-based control strategy to solve a typical energy scheduling problem. Second, multi-agent reinforcement learning (MARL) is put forward to efficiently coordinate different units with no communication burden. Third, a control strategy that merges hierarchical RL and MARL theory is proposed, yielding a robust control framework that can handle complex power system problems. A comparative performance evaluation of various RL-based and model-based control approaches is also presented. Experimental results from three typical energy dispatch scenarios show the effectiveness of the proposed control framework.
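To make the communication-free MARL idea concrete, the sketch below shows independent learners scheduling deferrable devices: each agent keeps a private value estimate per time period and never exchanges messages with the others. The price vector, the bandit-style Q update, and all parameter values are illustrative assumptions for this sketch, not the paper's actual algorithm or data.

```python
# Minimal sketch of communication-free multi-agent learning for device
# scheduling. Prices, parameters, and the update rule are assumptions.
import random

random.seed(0)

PRICES = [0.30, 0.10, 0.25, 0.40]  # hypothetical electricity price per period
N_PERIODS = len(PRICES)
N_AGENTS = 3                        # each agent controls one deferrable device

# Independent learners: each agent has its own value table and observes only
# its own reward (no communication burden between agents).
q_values = [[0.0] * N_PERIODS for _ in range(N_AGENTS)]

ALPHA, EPSILON, EPISODES = 0.1, 0.2, 3000

for _ in range(EPISODES):
    for agent in range(N_AGENTS):
        # epsilon-greedy choice of the period in which to run the device
        if random.random() < EPSILON:
            a = random.randrange(N_PERIODS)
        else:
            a = max(range(N_PERIODS), key=lambda p: q_values[agent][p])
        r = -PRICES[a]  # reward: negative cost of running in period a
        q_values[agent][a] += ALPHA * (r - q_values[agent][a])

# Each agent independently converges on the cheapest period.
schedule = [max(range(N_PERIODS), key=lambda p: q_values[agent][p])
            for agent in range(N_AGENTS)]
print(schedule)
```

In this toy setting every agent settles on the lowest-price period; the paper's full framework additionally layers hierarchical RL on top to handle multi-objective and partially observable cases.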
