Abstract
Reinforcement learning (RL) is an emerging and promising technique for building control that has demonstrated superior performance. Due to concerns about operational security and data demands, RL controllers are typically trained using building energy simulation (BES) tools to simulate building and system responses. However, these tools often assume perfectly mixed indoor air, which limits the effectiveness of RL controllers in real-world, non-uniform indoor spaces. To address this issue, we propose a novel co-simulation framework that combines a data-driven model for fast and accurate prediction of non-uniform indoor environments with a first-principles model that simulates building and system dynamics. This framework's usage and performance are demonstrated through a case study on RL-based space cooling control. Our framework enables training RL controllers with data from various locations within a non-uniform environment, yielding more realistic results than the well-mixed air assumption while increasing computation time by only 70%. Compared to the conventional Computational Fluid Dynamics (CFD)-BES co-simulation approach, our framework accelerates the simulation process by approximately 8000 times. This provides a highly efficient and feasible solution for advanced building control applications, showing significant potential for practical implementation.
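The sketch below illustrates the co-simulation idea described in the abstract under stated assumptions: a toy first-principles zone model stands in for the BES tool, a fixed-offset function stands in for the data-driven non-uniform-environment model, and a simple policy search stands in for the RL algorithm. All names (`BESModel`, `NonUniformSurrogate`, `evaluate_policy`), parameters, and the simplified physics are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

class BESModel:
    """Toy first-principles zone model: a single lumped thermal mass.
    Stand-in for a real BES tool such as EnergyPlus (assumption)."""
    def __init__(self, t_zone=28.0, dt=300.0):
        self.t_zone = t_zone   # zone mean air temperature [degC]
        self.dt = dt           # timestep [s]

    def step(self, cooling_kw, t_out):
        gains = 2.0                             # internal gains [kW]
        envelope = 0.5 * (t_out - self.t_zone)  # envelope heat flow [kW]
        capacitance = 2000.0                    # thermal capacitance [kJ/K]
        self.t_zone += (gains + envelope - cooling_kw) * self.dt / capacitance
        return self.t_zone

class NonUniformSurrogate:
    """Stand-in for the data-driven model: maps the zone mean temperature
    to temperatures at several occupant locations."""
    def __init__(self, n_locations=4, seed=0):
        # Fixed per-location offsets imitating spatial non-uniformity; a real
        # surrogate would be trained on CFD or measured indoor data.
        self.offsets = np.random.default_rng(seed).uniform(-1.5, 1.5, n_locations)

    def predict(self, t_zone):
        return t_zone + self.offsets

def evaluate_policy(gain, steps=288):
    """Roll out one simulated day; cost penalizes local (not mean) overheating."""
    bes, surrogate = BESModel(), NonUniformSurrogate()
    cost = 0.0
    for k in range(steps):
        t_out = 30.0 + 3.0 * np.sin(2 * np.pi * k / steps)  # outdoor temperature
        t_locals = surrogate.predict(bes.t_zone)
        # The controller reacts to the warmest occupied location, not the
        # well-mixed mean -- the key difference the framework enables.
        cooling = gain * max(t_locals.max() - 25.0, 0.0)
        bes.step(cooling, t_out)
        cost += cooling + 10.0 * max(t_locals.max() - 26.0, 0.0)
    return cost

# Placeholder policy search standing in for an RL algorithm (e.g., DQN or PPO):
rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 2.0, size=20)
best_gain = min(candidates, key=evaluate_policy)
print(f"best proportional gain: {best_gain:.2f}")
```

In the paper's actual framework, `NonUniformSurrogate.predict` would be replaced by the trained data-driven model and the policy search by a full RL training loop; the structure of the coupled stepping, however, follows the co-simulation pattern the abstract describes.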