Abstract

Deep reinforcement learning has recently emerged as a popular approach for enhancing thermal energy management in buildings due to its flexibility and model-free nature. However, its slow convergence poses a challenge. A common remedy is to pre-train deep reinforcement learning controllers offline in physics-based simulation environments, but developing such models requires significant effort and expertise. Data-driven models offer a promising alternative by emulating building dynamics, yet they struggle to predict previously unseen patterns. This paper therefore introduces a strategy to effectively train and deploy a deep reinforcement learning controller by means of long short-term memory neural networks. The experiments were carried out using an EnergyPlus simulation environment as a proxy for a real building. An automatic, recursive procedure determines the minimum amount of historical data required to train a robust data-driven model that mimics the building dynamics. The trained deep reinforcement learning agent meets safety requirements in the simulation environment after two and a half months of training, and it reduces indoor temperature violations by 80% while consuming the same amount of energy as a baseline rule-based controller.
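
To make the recursive data-sufficiency procedure concrete, the sketch below grows the historical training window one week at a time until a long short-term memory surrogate of the building's thermal dynamics reaches a target validation error. This is a minimal illustration, not the paper's implementation: the network architecture, the one-week increment, the error threshold, and the synthetic telemetry are all assumptions introduced here.

```python
# Hypothetical sketch: recursively grow the amount of historical data used
# to train an LSTM surrogate of building thermal dynamics, stopping once
# validation error is low enough. Thresholds, shapes, and the synthetic
# data are illustrative assumptions, not values from the paper.
import numpy as np
import torch
import torch.nn as nn

class BuildingLSTM(nn.Module):
    """Maps a window of past (weather, setpoint, indoor temperature)
    features to the next indoor temperature."""
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, lookback, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict next indoor temperature

def make_windows(series, lookback=24):
    """Slice a (T, n_features) array into supervised (X, y) pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:, -1:]           # last column = indoor temperature
    return (torch.tensor(X, dtype=torch.float32),
            torch.tensor(y, dtype=torch.float32))

def train_and_validate(train, val, epochs=50):
    """Fit the surrogate on `train` and return its validation MSE."""
    Xtr, ytr = train
    model = BuildingLSTM(n_features=Xtr.shape[-1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):              # full-batch training, for brevity
        opt.zero_grad()
        loss_fn(model(Xtr), ytr).backward()
        opt.step()
    with torch.no_grad():
        Xva, yva = val
        return loss_fn(model(Xva), yva).item()

def minimum_history(series, week=168, tol=0.05, max_weeks=52):
    """Recursively add one more week of history until the surrogate's
    validation MSE falls below `tol` (an assumed robustness threshold)."""
    val = make_windows(series[-4 * week:])  # hold out the last four weeks
    for n_weeks in range(1, max_weeks + 1):
        train = make_windows(series[-(4 + n_weeks) * week:-4 * week])
        if train_and_validate(train, val) < tol:
            return n_weeks               # smallest sufficient history
    return max_weeks

# Synthetic stand-in for one year of hourly building telemetry; real data
# would also be normalized before training.
rng = np.random.default_rng(0)
t = np.arange(24 * 365)
data = np.stack([10 + 10 * np.sin(2 * np.pi * t / 24),  # outdoor temperature
                 20 + rng.normal(0, 0.5, t.size),       # setpoint
                 21 + 2 * np.sin(2 * np.pi * t / 24)],  # indoor temperature
                axis=1)
print("weeks of history needed:", minimum_history(data))
```

Once the surrogate is judged sufficiently accurate, it would stand in for the EnergyPlus model as the environment in which the deep reinforcement learning agent is pre-trained.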
