Abstract

Smart buildings have great potential for shaping an energy-efficient, sustainable, and more economical future for our planet, as buildings account for approximately 40% of global energy consumption. The future of smart buildings lies in using sensory data for adaptive decision making and control, which is currently hampered by the key challenge of learning a good control policy in a short period of time, in an online and continuing fashion. To tackle this challenge, an event-triggered paradigm, as opposed to the classic time-triggered one, is proposed, in which learning and control decisions are made when events occur and enough information has been collected. Events are characterized by certain design conditions and occur when those conditions are met, for instance when a certain state threshold is reached. By systematically adjusting the timing of learning and control decisions, the proposed framework can potentially reduce the variance in learning and, consequently, improve the control process. We formulate the micro-climate control problem using semi-Markov decision processes, which allow for variable-time state transitions and decision making. Using extended policy gradient theorems and temporal-difference methods in a reinforcement learning setup, we propose two learning algorithms for event-triggered control of the micro-climate in buildings. We demonstrate the efficacy of the proposed approach by designing a smart learning thermostat that simultaneously optimizes energy consumption and occupants' comfort in a test building.
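To make the event-triggered idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: a control decision is made only when an event condition holds (here, the room temperature leaving a comfort band around the setpoint), and the variable sojourn time between events is recorded, as in a semi-Markov decision process. All function names, dynamics, and numerical values are hypothetical.

```python
# Hedged sketch of event-triggered thermostat decisions (hypothetical
# dynamics and constants; not the paper's learning algorithm).

def event_condition(temp, setpoint, band=0.5):
    """Event fires when the temperature leaves the comfort band."""
    return abs(temp - setpoint) > band

def simulate(setpoint=21.0, steps=50):
    temp, heater_on = 19.0, False
    events = []          # list of (time step, sojourn time since last event)
    last_event_t = 0
    for t in range(steps):
        # Toy room dynamics: heat gain when the heater is on,
        # slow drift toward an assumed outside temperature of 15 C.
        temp += (0.4 if heater_on else 0.0) - 0.01 * (temp - 15.0)
        if event_condition(temp, setpoint):
            tau = t - last_event_t       # variable inter-event time (SMDP)
            events.append((t, tau))
            heater_on = temp < setpoint  # control decision only at events
            last_event_t = t
    return events
```

In a learning variant, the policy update would likewise occur only at these event times, using the accumulated reward and sojourn time `tau` between decisions rather than a fixed sampling period.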