The building sector consumes substantial energy, with HVAC systems accounting for nearly half of total building energy use, so optimizing chiller unit operation is crucial for reducing energy consumption. This research introduces a novel model-free control method for optimizing central chiller load distribution using deep reinforcement learning (DRL), aiming to address the high-dimensional complexity and parameter-tuning burden of traditional meta-heuristic algorithms. The proposed method combines deep neural networks with reinforcement learning to optimize chiller control directly from real-world operating data. Case studies conducted in a large airport terminal show that the approach achieves 7.71% energy savings, closely matching the performance of model-based control methods with only a 0.26% difference. The results indicate that the DRL method not only outperforms a traditional expert control system but also offers a feasible alternative for optimal control in complex, variable environments with limited data. This study fills a research gap in applying DRL to optimal chiller load (OCL) control and demonstrates its potential for large-scale public building applications. The novelty of the research lies in its model-free approach, which provides a robust solution for energy optimization in dynamic and uncertain environments.
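The abstract describes the approach only at a high level; the paper does not specify the DRL algorithm or plant model here. The following is a minimal illustrative sketch of how a model-free agent could learn chiller load distribution from operating data: a small value network chooses part-load-ratio (PLR) combinations for two chillers so that cooling demand is met at minimum total power. All names, capacities, power curves, reward terms, and hyperparameters are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of DRL-based optimal chiller loading (not the paper's method).
import random
import torch
import torch.nn as nn
import torch.optim as optim

PLR_CHOICES = [0.4, 0.6, 0.8, 1.0]                              # candidate part-load ratios per chiller
ACTIONS = [(a, b) for a in PLR_CHOICES for b in PLR_CHOICES]    # joint discrete action space
CAPACITY = (1800.0, 2200.0)                                     # assumed chiller cooling capacities, kW

def chiller_power(plr, capacity):
    # Hypothetical quadratic part-load power curve (kW electric).
    return capacity * (0.18 + 0.45 * plr + 0.37 * plr ** 2) / 5.0

def reward(cooling_load, action_idx):
    # Negative total power, penalised when delivered cooling falls short of demand.
    plr1, plr2 = ACTIONS[action_idx]
    delivered = plr1 * CAPACITY[0] + plr2 * CAPACITY[1]
    power = chiller_power(plr1, CAPACITY[0]) + chiller_power(plr2, CAPACITY[1])
    shortfall = max(0.0, cooling_load - delivered)
    return -(power + 10.0 * shortfall)

# Small value network: cooling demand in, one value per joint PLR action out.
q_net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)

for episode in range(2000):
    load = random.uniform(1500.0, 3500.0)            # sampled cooling demand, kW
    state = torch.tensor([[load / 4000.0]])          # normalised state
    eps = max(0.05, 1.0 - episode / 1500.0)          # epsilon-greedy exploration schedule
    if random.random() < eps:
        action = random.randrange(len(ACTIONS))
    else:
        action = int(q_net(state).argmax())
    # One-step target: each dispatch decision is treated as independent in this sketch.
    target = torch.tensor([[reward(load, action) / 1000.0]])
    pred = q_net(state)[0, action].unsqueeze(0).unsqueeze(0)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the greedy policy yields a load-distribution decision for any demand.
demand = torch.tensor([[2800.0 / 4000.0]])
print("chosen PLRs:", ACTIONS[int(q_net(demand).argmax())])
```

The sketch omits what a real deployment would need (multi-chiller state such as supply/return temperatures and flow rates, experience replay, safety constraints, and training on logged building data rather than a synthetic load), but it shows the model-free pattern the abstract refers to: the controller improves from observed rewards without an explicit plant model.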