Abstract

Model-free control approaches, such as Reinforcement Learning (RL), can be trained on historical data and therefore offer low cost and good scalability. However, conventional RL does not provide efficient, coordinated control across regional buildings; the resulting inter-building energy coupling leads to higher energy consumption. This paper presents energy optimization strategies based on Distributed Reinforcement Learning (DRL) that reduce energy consumption in regional buildings while maintaining human comfort. Through parameter sharing and coordinated optimization, the proposed system learns control policies that lower building energy consumption. The strategies are validated using nine campus buildings as a case study. The results show that the proposed strategies achieve the lowest total energy consumption compared with Rule-Based Control (RBC), the Soft Actor-Critic (SAC) strategy, Model Predictive Control (MPC), and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Furthermore, the proposed strategies demonstrate good accuracy and robustness in a comprehensive evaluation of multi-building energy consumption covering error analysis, load factor, power demand, and net power consumption.
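The abstract does not include code; as a rough illustration of the parameter-sharing idea it describes, the following PyTorch sketch shows several building agents acting through a single shared policy network, so experience from any building updates one common set of parameters. All dimensions, variable names, and the aggregate-demand penalty are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SharedPolicy(nn.Module):
    """One policy network whose parameters are shared by all building agents."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


# Hypothetical dimensions: each building observes, e.g., indoor temperature,
# occupancy, and outdoor conditions (obs_dim=6) and adjusts two HVAC controls.
NUM_BUILDINGS, OBS_DIM, ACT_DIM = 9, 6, 2
policy = SharedPolicy(OBS_DIM, ACT_DIM)

# Each agent acts on its own local observation, but every forward pass uses the
# same weights, which is the essence of parameter sharing across agents.
local_obs = torch.randn(NUM_BUILDINGS, OBS_DIM)  # placeholder observations
actions = policy(local_obs)                      # shape (9, 2): one action per building

# A simple coordination term: penalizing aggregate control effort discourages
# agents from shifting load onto one another (illustrative stand-in only).
aggregate_penalty = actions.abs().sum()
print(actions.shape, float(aggregate_penalty))
```

In a full training loop, this shared policy would typically be optimized with an actor-critic method (such as SAC, one of the baselines named above) using a reward that balances total energy consumption against comfort constraints.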
