Abstract
Residential Heating, Ventilation, and Air Conditioning (HVAC) systems account for a significant share of energy consumption, but their management is challenging due to the complexities of building thermodynamics and human activities. Reinforcement learning (RL) has been adopted to tackle this issue, but traditional RL methods require massive training data, long learning periods, and frequent equipment adjustments. To address these issues, we construct a new event-driven Markov decision process (ED-MDP) framework, in which control-policy adjustments are triggered by events, reducing unnecessary operations. Building on this framework, we propose an event-driven deep Q-network (ED-DQN) method, which optimizes action selection based on the triggered events. In the HVAC control problem, the proposed ED-DQN effectively captures the dynamic, non-linear features of thermal comfort and reduces the equipment wear caused by frequent adjustments. Our experimental results show that, compared to three benchmark methods and three RL methods, ED-DQN achieves state-of-the-art performance in both energy savings and the reduction of thermal comfort violations. Moreover, our method performs well when applied to new test thermal environments, indicating its robustness and adaptability for optimizing residential HVAC control.
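To illustrate the event-driven idea described above, the sketch below shows a control loop in which a Q-function is consulted, and the HVAC actuation changed, only when an event fires. Everything here is a hypothetical stand-in, not the paper's implementation: the event condition (temperature drifting past a comfort threshold), the linear Q-function in place of a trained deep network, the action set, and the toy first-order thermal model are all assumptions made for the sake of a runnable example.

```python
import numpy as np

# Assumed event trigger: act only when the indoor temperature drifts more
# than EVENT_THRESHOLD degrees from the comfort setpoint (illustrative values).
EVENT_THRESHOLD = 1.0       # degrees C; not from the paper
SETPOINT = 22.0             # target indoor temperature (degrees C)
ACTIONS = [-2.0, 0.0, 2.0]  # heating/cooling power deltas (illustrative)

# Stand-in for a trained DQN: a tiny linear Q-function over the temperature
# error, with one weight row per action (hand-picked, not learned).
W = np.array([[0.9], [0.0], [-0.9]])

def q_values(temp_error):
    """Q-value per action for the current (1-dimensional) state."""
    return (W @ np.array([temp_error])).ravel()

def step(indoor_temp, outdoor_temp, action):
    """Toy first-order thermal model: drift toward outdoor temp plus actuation."""
    return indoor_temp + 0.1 * (outdoor_temp - indoor_temp) + action

indoor, outdoor = 22.0, 10.0
adjustments = 0
for t in range(50):
    error = indoor - SETPOINT
    if abs(error) > EVENT_THRESHOLD:
        # Event triggered: query the Q-function and adjust the equipment.
        action = ACTIONS[int(np.argmax(q_values(error)))]
        adjustments += 1
    else:
        # No event: leave the equipment untouched, avoiding needless operations.
        action = 0.0
    indoor = step(indoor, outdoor, action)
```

Under this event-driven loop the controller intervenes on only a fraction of the time steps while keeping the indoor temperature near the setpoint, which is the trade-off (fewer equipment adjustments at comparable comfort) that the abstract attributes to ED-DQN.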