Abstract

Deep reinforcement learning (DRL) control methods have shown great potential for optimal HVAC control, but they require significant time and data to learn effective policies. Employing transfer learning (TL) with pre-trained models avoids learning a policy from scratch, saving time and resources. However, this approach faces two critical issues: inappropriate selection of the source domain, which degrades control performance, and inefficient utilization of control experience from multiple source domains. To address these challenges, a multi-source transfer learning and deep reinforcement learning (MTL-DRL) integrated framework is proposed for efficient HVAC system control. To select appropriate source domains, the contribution of each candidate source domain to the target task is first quantified, followed by a comprehensive evaluation of transfer performance based on average energy consumption and average temperature deviation. The well-pretrained DRL parameters from the optimal multi-source transfer set are then sequentially transferred to the target DRL controller. Results from a series of transfer experiments between buildings with different thermal zones and weather conditions indicate that the MTL-DRL framework significantly reduces the training time of HVAC control, with improvements of up to 20% compared with DRL baseline models trained from scratch. The MTL-DRL method also reduces average energy consumption by 1.43% to 3.12% and average temperature deviation by up to 14.32%. The impact of the source-domain transfer sequence on the performance of the DRL-based control method is also discussed. Overall, the proposed framework offers a promising way to enhance DRL-based HVAC control by reducing training time and energy consumption while maintaining occupants' comfort.
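The abstract describes a three-step pipeline: quantify each source domain's contribution to the target task, evaluate transfer performance via average energy consumption and average temperature deviation, then sequentially transfer pre-trained parameters into the target controller. The sketch below illustrates that pipeline in minimal Python; all names, metric values, the linear scoring rule, and the parameter-blending step are illustrative assumptions, not the authors' implementation (which would operate on full DRL network weights and continue training after initialization).

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SourceDomain:
    """A candidate source building whose pre-trained DRL policy may be transferred.

    avg_energy and avg_temp_dev are the two transfer-performance metrics the
    abstract names: average energy consumption (kWh) and average temperature
    deviation (deg C), measured when the source policy is evaluated on the
    target building. All values used here are illustrative.
    """
    name: str
    avg_energy: float
    avg_temp_dev: float
    params: Dict[str, float] = field(default_factory=dict)

def transfer_score(d: SourceDomain, w_energy: float = 0.5, w_comfort: float = 0.5) -> float:
    """Quantify a source domain's contribution to the target task.

    Lower energy use and lower temperature deviation give a higher score.
    The linear weighting is an assumption; the paper's metric may differ.
    """
    return -(w_energy * d.avg_energy + w_comfort * d.avg_temp_dev)

def select_transfer_set(candidates: List[SourceDomain], k: int) -> List[SourceDomain]:
    """Pick the k best-scoring source domains as the multi-source transfer set."""
    return sorted(candidates, key=transfer_score, reverse=True)[:k]

def sequential_transfer(transfer_set: List[SourceDomain],
                        target_params: Dict[str, float],
                        blend: float = 0.5) -> Dict[str, float]:
    """Sequentially blend each source's pre-trained parameters into the target.

    Sources are visited in score order; in the real framework this step would
    initialize the target DRL agent's network weights, then training resumes.
    """
    for src in transfer_set:
        for key, value in src.params.items():
            target_params[key] = (1 - blend) * target_params.get(key, 0.0) + blend * value
    return target_params

# Example: three hypothetical source buildings, keep the best two, transfer.
candidates = [
    SourceDomain("office_hot_climate", avg_energy=120.0, avg_temp_dev=0.9, params={"w": 1.0}),
    SourceDomain("office_cold_climate", avg_energy=150.0, avg_temp_dev=1.4, params={"w": 2.0}),
    SourceDomain("retail_mild_climate", avg_energy=110.0, avg_temp_dev=0.7, params={"w": 0.5}),
]
best = select_transfer_set(candidates, k=2)
init_params = sequential_transfer(best, target_params={"w": 0.0})
```

Because the transfer is sequential, later sources in the ordering dominate the final initialization, which is one concrete reason the transfer sequence affects control performance, as the abstract notes.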
