Abstract
The building sector consumes substantial energy, with HVAC systems accounting for nearly half of total building energy use. Optimizing chiller unit operation is therefore crucial for reducing energy consumption. This research introduces a model-free control method for optimizing central chiller load distribution using deep reinforcement learning (DRL), aiming to address the high dimensionality and parameter-tuning burden that hinder traditional meta-heuristic algorithms. The proposed method combines deep neural networks with reinforcement learning to optimize chiller control based on real-world operational data. Case studies conducted in a large airport terminal show that the approach achieves energy savings of 7.71%, closely matching the performance of model-based control methods with only a 0.26% difference. The results indicate that the DRL method not only outperforms traditional expert control systems but also offers a feasible alternative for optimal control in complex, variable environments with limited data. This study fills a research gap in applying DRL to optimal chiller load (OCL) control and demonstrates its potential for large-scale public building applications. The novelty of this research lies in its model-free approach, which provides a robust solution for energy optimization in dynamic and uncertain environments.
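To make the idea of model-free OCL control more concrete, the sketch below shows a minimal REINFORCE-style policy network in PyTorch that learns a load split among several chillers from a power-consumption reward, treating the plant as a black box. The network size, state features, candidate load-split actions, and the toy part-load power curve are all illustrative assumptions for this sketch, not the architecture, action space, or data reported in the study.

```python
# Minimal sketch (illustrative only): a policy network selects one of several
# candidate load splits across three chillers, given a simple plant state, and
# is trained with a REINFORCE gradient on the (negative) total plant power.
import torch
import torch.nn as nn

N_CHILLERS = 3
# Hypothetical candidate load-split actions (fractions summing to 1).
ACTIONS = torch.tensor([
    [1.00, 0.00, 0.00],
    [0.50, 0.50, 0.00],
    [0.40, 0.30, 0.30],
    [0.34, 0.33, 0.33],
])

policy = nn.Sequential(              # state: normalized cooling load + wet-bulb temp
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, len(ACTIONS)),     # logits over candidate load splits
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def plant_power(load_kw, split):
    """Stand-in for the real, unknown plant: toy quadratic part-load curves."""
    per_chiller = load_kw * split
    return (0.2 * per_chiller + 0.0004 * per_chiller ** 2).sum()

for episode in range(2000):
    load_kw = torch.rand(1) * 3000 + 1000        # random cooling load, kW
    wet_bulb = torch.rand(1) * 10 + 20           # random wet-bulb temperature, degC
    state = torch.cat([load_kw / 4000, wet_bulb / 30])

    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()
    reward = -plant_power(load_kw, ACTIONS[action]) / 1000.0  # minimize power

    loss = -dist.log_prob(action) * reward       # REINFORCE estimator (no baseline)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the reward would come from metered plant power rather than an analytic curve, and a value baseline or a more sample-efficient DRL algorithm would typically replace plain REINFORCE; the sketch only illustrates how load allocation can be learned without an explicit chiller model.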