Abstract

The heating, ventilation, and air-conditioning (HVAC) system accounts for substantial energy use in buildings, yet a large share of occupants still do not feel comfortable indoors. This raises the problem of energy-efficient HVAC control, i.e., reducing energy use (cost) while simultaneously enhancing occupant comfort. This brief pursues that objective by studying stochastic optimal HVAC control subject to uncertain thermal demand (i.e., weather and occupancy). In particular, we incorporate the elaborate predicted mean vote (PMV) thermal comfort model into the optimization. The problem is computationally challenging because of the nonlinear and nonanalytical constraints imposed by the system dynamics and the PMV model. We make the following contributions to address it. First, we formulate the problem as a Markov decision process (MDP), a modeling technique well suited to handling these complexities. Second, we propose a gradient-based learning (GB-L) method that progressively learns a stochastic control policy off-line and stores it for on-line execution. Third, we prove theoretically that the learning method converges to the optimal policies, and we evaluate its performance (i.e., energy cost, thermal comfort, and on-line computation time) for HVAC control via simulations. Comparisons with the existing model predictive control based relaxation (MPC-R) method, which assumes accurate future information and is expected to provide near-optimal bounds, show that although the proposed method sacrifices some energy cost reduction (i.e., 6.5%), it enables efficient on-line implementation (less than 1 s) and provides a high probability of thermal comfort under uncertainties.
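To make the off-line learning / on-line execution split concrete, the sketch below shows a generic policy-gradient (REINFORCE-style) loop on a toy, made-up HVAC MDP. This is not the paper's GB-L algorithm or its PMV-based cost; the state discretization, dynamics, and reward numbers are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy MDP: indoor temperature discretized into states,
# actions are HVAC power levels. All numbers are illustrative only.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 11, 3     # temps 16..26 C, power levels {0, 1, 2}
COMFORT = 5                     # index of the "comfortable" state (21 C)

def step(s, a):
    """Stochastic dynamics: heating pushes temperature up, weather noise perturbs it."""
    drift = (a - 1) + rng.integers(-1, 2)          # action effect + random disturbance
    s2 = int(np.clip(s + drift, 0, N_STATES - 1))
    reward = -0.3 * a - abs(s2 - COMFORT)          # energy cost + discomfort penalty
    return s2, reward

theta = np.zeros((N_STATES, N_ACTIONS))            # softmax policy parameters

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

# Off-line learning: gradient ascent on expected return (REINFORCE-style).
for episode in range(3000):
    s, traj = COMFORT, []
    for _ in range(24):                            # one day of hourly decisions
        a = rng.choice(N_ACTIONS, p=policy(s))
        s2, r = step(s, a)
        traj.append((s, a, r))
        s = s2
    G = 0.0
    for s_t, a_t, r_t in reversed(traj):           # discounted return, backward pass
        G = r_t + 0.95 * G
        grad = -policy(s_t)                        # gradient of log-softmax ...
        grad[a_t] += 1.0                           # ... for the taken action
        theta[s_t] += 0.01 * G * grad

# On-line execution is a cheap lookup in the stored policy table.
print(policy(COMFORT).round(2))
```

The point of the structure is the one the abstract makes: all expensive computation happens in the training loop, while the on-line step is a single table lookup, which is why sub-second on-line execution is achievable.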
