Abstract

With the increasing complexity of building energy systems and rising shares of renewable energies in the grids, the requirements for building automation and control systems (BACS) are growing. Storage systems enable the decoupling of energy demand and supply and allow dynamic constraints to be considered in the control of the systems. The resulting optimization problem is very challenging to solve with the state-of-the-art rule-based control (RBC) approach. Model Predictive Control (MPC), on the other hand, allows nearly optimal operation but comes with expensive modeling efforts and high computational costs. These drawbacks are contrasted by promising results from the field of Reinforcement Learning (RL). RL can be model-free, is highly adaptive, and learns a policy by interacting with the controlled system. However, the literature also raises a number of questions to be answered before RL for BACS can be realized. One is the slow convergence of the training process, which makes a pre-training strategy necessary. We therefore design and compare different pre-training workflows for a real-world energy system in a demand response scenario. We apply a data-driven approach covering all aspects from raw monitoring data to the trained algorithm. The considered energy system consists of two compression chillers and an ice storage. The objective of the control task is to charge and discharge the storage with respect to dynamic constraints. We use machine learning models of the energy system to train and evaluate a state-of-the-art RL algorithm (DQN) under five different pre-training strategies. We compare online and offline training and initialization of the RL controller together with a guiding RBC. We demonstrate that offline training with a guiding RBC provides stable learning and an RL controller that always outperforms this guiding RBC. Unguided exploration, on the other hand, leads to higher accumulated cost savings.
Based on our findings, we derive recommendations for practical application and future research questions.
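To illustrate the core idea of guided versus unguided exploration, the following minimal sketch shows one common way an RBC can guide a DQN during training: exploratory actions are taken from the rule-based controller instead of being sampled uniformly at random. All names here (the toy `rbc_policy`, the price-based rule, the three-action space) are illustrative assumptions, not the paper's actual implementation.

```python
import random

N_ACTIONS = 3  # assumed toy action space: 0 = charge storage, 1 = discharge, 2 = idle

def rbc_policy(state):
    """Toy guiding RBC (assumption): charge the ice storage when the
    electricity price is low, otherwise discharge it."""
    return 0 if state["price"] < 0.5 else 1

def select_action(q_values, state, epsilon, guided=True):
    """Epsilon-greedy action selection.

    With probability epsilon the agent explores; in the guided variant the
    exploratory action comes from the RBC, in the unguided variant it is
    drawn uniformly at random. Otherwise the agent acts greedily on its
    current Q-value estimates.
    """
    if random.random() < epsilon:
        if guided:
            return rbc_policy(state)            # guided exploration
        return random.randrange(N_ACTIONS)      # unguided exploration
    return max(range(N_ACTIONS), key=lambda a: q_values[a])  # greedy action

# Usage: epsilon=0 always acts greedily, epsilon=1 always explores.
state = {"price": 0.3}          # low price -> RBC would charge
q_values = [0.1, 0.9, 0.2]      # greedy action is 1 (discharge)
print(select_action(q_values, state, epsilon=0.0))  # → 1 (greedy argmax)
print(select_action(q_values, state, epsilon=1.0))  # → 0 (RBC: price low, charge)
```

The trade-off reported in the abstract maps directly onto this switch: `guided=True` constrains early exploration to sensible RBC behavior (stable learning), while `guided=False` explores more broadly and, in the authors' experiments, ultimately found higher accumulated cost savings.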
