Abstract

The demand response (DR) program of a traditional home energy management system (HEMS) usually controls or schedules appliances to monitor energy usage, minimize energy cost, and maximize user comfort. In this study, instead of interfering with appliances and changing residents’ behavior, the proposed hour-ahead DR strategy first learns the appliance usage behavior of the residents; based on this knowledge, it then silently controls the energy storage system (ESS) and renewable energy system (RES) to minimize the daily energy cost. To this end, the proposed deep neural networks (DNNs) approximate a mixed-integer linear programming (MILP) optimization using supervised learning: the training datasets are created from the optimal outputs of an MILP solver applied to historical data. After training, in each time slot, these DNNs control the ESS and RES using real-time data from the surrounding environment. For comparison, we develop two alternative strategies, namely a multi-agent reinforcement-learning-based strategy, which is an hour-ahead strategy, and a forecast-based MILP strategy, which is a day-ahead strategy. For evaluation and verification, the proposed approaches are applied to three different real-world homes using real-world, real-time global horizontal irradiation and price data. Numerical results verify the effectiveness and superiority of the proposed MILP-based supervised learning strategy in terms of daily energy cost.
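The core idea described above, training DNNs to imitate the optimal decisions of an MILP solver, can be sketched as follows. This is a minimal, hypothetical illustration only: the feature set, the network size, and the labeling rule are assumptions, and the "MILP-optimal" ESS setpoints are mocked here by a simple price-threshold rule, whereas in the paper they come from solving the MILP on historical data.

```python
import numpy as np

# Imitation-learning sketch: a small neural network is trained with
# supervised learning to reproduce per-time-slot ESS setpoints that an
# MILP solver would produce. All names and rules below are illustrative.

rng = np.random.default_rng(0)

# Assumed features per time slot: [price, PV generation, load, ESS state of charge]
X = rng.uniform(0.0, 1.0, size=(2000, 4))

# Mock "MILP-optimal" ESS power in [-1, 1] (positive = discharge):
# discharge at high price, charge at low price, scaled by available capacity.
price, pv, load, soc = X.T
y = np.clip((price - 0.5) * 2.0 * np.where(price > 0.5, soc, 1.0 - soc),
            -1.0, 1.0).reshape(-1, 1)

# One-hidden-layer MLP trained with full-batch gradient descent on MSE loss.
H = 16
W1 = rng.normal(0, 0.5, size=(4, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, size=(H, 1)); b2 = np.zeros(1)

lr = 0.05
losses = []
for epoch in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                    # predicted ESS setpoint
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the MSE gradient
    g_pred = 2.0 * err / len(X)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

At deployment time, the trained network replaces the solver: each hour, the current real-time features are fed forward once to obtain the ESS action, avoiding the cost of solving an MILP online.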
