Abstract

Advanced controls can enhance buildings’ energy efficiency and operational flexibility while guaranteeing indoor comfort. The control performance of reinforcement learning (RL) and model predictive control (MPC) has been widely studied in the literature. However, existing studies test RL and MPC in separate environments, making it difficult to compare their performance directly. In this paper, RL and MPC controllers are implemented and compared with a traditional rule-based controller in an open-source virtual environment to control the heat pump system of a residential house. The RL controllers were developed with three widely used algorithms: Deep Deterministic Policy Gradient (DDPG), Dueling Deep Q Networks (DDQN), and Soft Actor Critic (SAC), while the MPC controller was developed using a reduced-order thermal resistance-capacity network model. The Building Optimization Testing (BOPTEST) framework is employed as a standardized virtual building simulator to conduct this study. The BOPTEST Hydronic Heat Pump test case is selected to assess and benchmark the control performance, data efficiency, implementation effort, and computational demand of the RL and MPC controllers. The comparison results reveal that, among the RL controllers, only the DDPG algorithm outperforms the baseline controller in both the typical and peak heating scenarios. The MPC controller is superior to the RL and baseline controllers in both scenarios because it can take the best available action based on the current system state, even with a model that deviates from reality to a certain degree. The findings of this study shed light on the selection of advanced building controllers between two promising candidates: MPC and RL.
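For context, the reduced-order thermal resistance-capacity (RC) network that underlies the MPC prediction model can be illustrated, in its simplest first-order form, by a lumped energy balance. The exact network order used in the paper is not restated here, so this single-state version is only a sketch:

$$C \frac{dT_z(t)}{dt} = \frac{T_a(t) - T_z(t)}{R} + \dot{Q}_{hp}(t)$$

where $T_z$ is the zone temperature, $T_a$ the ambient temperature, $R$ and $C$ the lumped thermal resistance and capacitance of the envelope, and $\dot{Q}_{hp}$ the heat delivered by the heat pump. The MPC controller then minimizes energy use or cost over a prediction horizon subject to these dynamics and to comfort bounds on $T_z$.

All controllers in such a study interact with the simulator through BOPTEST's HTTP API. The sketch below shows a minimal closed-loop interaction with a locally deployed test case. The endpoint set (/step, /initialize, /advance, /kpi) follows the published BOPTEST interface, but the base URL and the point names oveHeaPumY_u, oveHeaPumY_activate, and reaTZon_y are assumptions based on BOPTEST naming conventions and should be verified against GET /inputs and GET /measurements for the deployed case. A trivial proportional rule stands in for the RL or MPC policy.

```python
# Minimal closed-loop sketch against a locally running BOPTEST test case.
# Assumes the Hydronic Heat Pump case is served at localhost:5000 (e.g. via
# the BOPTEST docker deployment); point names are assumptions to be checked
# against GET /inputs and GET /measurements.
import requests

BASE = "http://localhost:5000"

def payload(response):
    """Unwrap the 'payload' field used by newer BOPTEST versions."""
    body = response.json()
    return body.get("payload", body)

# Set a 15-minute control step and initialize with a one-week warm-up.
requests.put(f"{BASE}/step", data={"step": 900})
y = payload(requests.put(f"{BASE}/initialize",
                         data={"start_time": 0,
                               "warmup_period": 7 * 24 * 3600}))

for _ in range(4 * 24 * 14):  # two weeks at 15-minute steps
    # Simple proportional rule standing in for the RL/MPC policy:
    # modulate the heat pump toward a 21 degC zone setpoint.
    error = 294.15 - y["reaTZon_y"]        # zone temperature in Kelvin
    u = min(max(0.2 * error, 0.0), 1.0)    # clip to the [0, 1] signal range
    y = payload(requests.post(f"{BASE}/advance",
                              data={"oveHeaPumY_u": u,
                                    "oveHeaPumY_activate": 1}))

# Core KPIs reported by the framework (total energy use, thermal
# discomfort, operational cost, ...), used to benchmark the controllers.
print(payload(requests.get(f"{BASE}/kpi")))
```

Because every controller, whether rule-based, RL, or MPC, drives the simulation through this same advance loop and is scored by the same KPI endpoint, the framework provides the common ground that makes the head-to-head comparison in this paper possible.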
