Abstract

As energy demand continues to increase, demand response (DR) programs in the electricity distribution grid are gaining momentum, and their adoption is set to grow over the years ahead. Demand response schemes seek to incentivise consumers to use green energy and to reduce their electricity usage during peak periods, which helps balance supply and demand on the grid and generates revenue by selling surplus energy back to the grid. This paper proposes an effective energy management system for residential demand response using Reinforcement Learning (RL) and Fuzzy Reasoning (FR). RL is a model-free control strategy that learns from interaction with its environment by performing actions and evaluating the results. The proposed algorithm accounts for human preference by directly integrating user feedback into its control logic, using fuzzy reasoning to construct the reward functions. Q-learning, an RL strategy based on a reward mechanism, is used to make optimal decisions for scheduling the operation of smart home appliances: controllable appliances are shifted from peak periods, when electricity prices are high, to off-peak hours, when electricity prices are lower, without affecting the customer’s preferences. The proposed approach uses a single agent to control 14 household appliances, with a reduced number of state-action pairs and fuzzy-logic reward functions to evaluate the action taken in a given state. Simulation results show that the proposed appliance scheduling approach can smooth the power consumption profile and minimise the electricity cost while respecting the user’s preferences, their feedback on each action taken and their preference settings. A user interface is developed in MATLAB/Simulink for the Home Energy Management System (HEMS) to demonstrate the proposed DR scheme. The simulation tool includes smart appliances, electricity pricing signals, smart meters, solar photovoltaic generation, battery energy storage, an electric vehicle and the grid supply.
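
The paper implements its HEMS in MATLAB/Simulink; as a rough illustration of the mechanism described above, the Python sketch below trains a single-agent tabular Q-learning controller for one shiftable appliance, with a fuzzy-style reward that blends a "cheap tariff" degree and a "delay still acceptable to the user" degree. The tariff values, membership breakpoints, aggregation weights and state encoding are illustrative assumptions, not the authors' exact design.

import random

# Illustrative time-of-use price signal (GBP/kWh): cheap overnight, expensive in the evening.
PRICE = [0.10] * 7 + [0.20] * 9 + [0.35] * 6 + [0.10] * 2   # 24 hourly prices (assumed values)

def cheapness(price, lo=0.12, hi=0.30):
    """Fuzzy degree to which the current tariff is 'cheap' (1 below lo, 0 above hi)."""
    return max(0.0, min(1.0, (hi - price) / (hi - lo)))

def comfort(delay, lo=2.0, hi=8.0):
    """Fuzzy degree to which a delay in hours is still acceptable to the user."""
    return max(0.0, min(1.0, (hi - delay) / (hi - lo)))

def fuzzy_reward(hour, delay):
    """Aggregate 'cheap tariff' and 'acceptable delay' degrees into one reward (weights assumed)."""
    return 0.6 * cheapness(PRICE[hour]) + 0.4 * comfort(delay)

ACTIONS = ["run", "defer"]        # one shiftable appliance: start now or wait another hour
ALPHA, GAMMA = 0.1, 0.9
Q = {}                            # Q[(hour, delay)] -> {action: value}

def q_values(state):
    # Optimistic initial values (1.0) push the agent to try deferring before settling.
    return Q.setdefault(state, {a: 1.0 for a in ACTIONS})

def run_episode(request_hour=17, epsilon=0.1):
    """One day: the appliance is requested at 17:00 and may be deferred hour by hour."""
    hour, delay = request_hour, 0
    while True:
        state = (hour, delay)
        q = q_values(state)
        action = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
        if action == "run" or delay >= 10:                  # force a start after a long wait
            q[action] += ALPHA * (fuzzy_reward(hour, delay) - q[action])   # terminal update
            return hour
        next_state = ((hour + 1) % 24, delay + 1)           # defer: move to the next hour
        q[action] += ALPHA * (GAMMA * max(q_values(next_state).values()) - q[action])
        hour, delay = next_state

for _ in range(5000):
    run_episode()
print("Learned start hour:", run_episode(epsilon=0.0))      # typically settles on an off-peak hour

With these assumed numbers, the learned greedy policy tends to defer the 17:00 request into the cheaper late-evening window rather than starting it at peak price, which is the shifting behaviour the paper targets.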

Highlights

  • Greenhouse gas emissions pose a serious concern across the world due to their negative impacts on the environment and climate change

  • This paper proposes an effective energy management system for residential demand response using Reinforcement Learning (RL) and Fuzzy Reasoning (FR)

  • Q-learning, an RL strategy based on a reward mechanism, is used to make optimal decisions to schedule the operation of smart home appliances by shifting controllable appliances from peak periods, when electricity prices are high, to off-peak hours, when electricity prices are lower, without affecting the customer’s preferences


Summary

INTRODUCTION

Greenhouse gas emissions pose a serious concern across the world due to their negative impacts on the environment and climate change. Price-based programs, on the other hand, can be considered an indirect means of controlling customers’ loads: under these programs, time-varying prices are offered to customers based on the cost of electricity at different time periods. HEMS can be considered the enabling technology for realising the potential of DR strategies, enabling consumers to improve their energy usage and minimise electricity bills by shifting and curtailing their loads in response to electricity tariffs during peak periods, without compromising their lifestyle and preferences [3], [5], [9], [10]. In [14], the author proposed a Hybrid Genetic Particle Swarm Optimisation (HGPO) algorithm to schedule the appliances of a house with local generation from Renewable Energy Sources (RES); that algorithm attempts to minimise electricity bills without considering the consumer’s preferences.
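
As a concrete, hypothetical illustration of the price-based idea, the short sketch below compares the running cost of a fixed two-hour appliance cycle under an assumed time-of-use tariff when it starts in a peak hour versus an off-peak hour; the tariff, power rating and cycle length are assumptions for illustration, not figures from the paper.

# Assumed time-of-use tariff (GBP/kWh): off-peak overnight, peak in the evening.
TARIFF = [0.10] * 7 + [0.20] * 9 + [0.35] * 6 + [0.10] * 2   # 24 hourly prices

def cycle_cost(start_hour, duration_h=2, power_kw=1.5):
    """Cost of one fixed appliance cycle started at a given hour."""
    return sum(power_kw * TARIFF[(start_hour + h) % 24] for h in range(duration_h))

print(f"Start at 18:00 (peak):     {cycle_cost(18):.2f} GBP")   # 2 h at 0.35 GBP/kWh -> 1.05
print(f"Start at 23:00 (off-peak): {cycle_cost(23):.2f} GBP")   # 2 h at 0.10 GBP/kWh -> 0.30

Shifting the same cycle from the peak window to the overnight window cuts its cost by more than two thirds under this assumed tariff, which is the saving a HEMS exploits when it reschedules controllable loads.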

DESCRIPTION OF THE HEMS ARCHITECTURE AND FUNCTIONALITIES
RESULTS AND DISCUSSION
CONCLUSION