Abstract

This study develops a new decentralised demand response (DR) model relying on bi-directional communication. In this model, each user is modelled as an agent that submits bids according to its consumption urgency and a set of parameters tuned by a reinforcement learning algorithm, Q-learning. The bids are sent to a local DR market, which communicates all bids to the wholesale market and the system operator (SO) and reports back to the customers after determining the local DR market clearing price. From the local market's viewpoint, the goal is to maximise social welfare. Four DR levels are considered to evaluate the effect of different DR portions on the cost of electricity purchase. The outcomes are compared with those obtained from a centralised (aggregation-based) approach as well as an uncontrolled method. Numerical studies show that the proposed decentralised model substantially reduces the electricity cost compared with the uncontrolled method, while remaining nearly as optimal as the centralised approach.
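To make the agent design concrete, the following is a minimal sketch of a tabular Q-learning bidder in Python. The state encoding, the action set (discretised bid-price levels), and the reward shaping are illustrative assumptions, not the paper's exact formulation.

```python
import random

# Minimal tabular Q-learning sketch of a bidding agent (illustrative only;
# state/action encodings and reward shaping are assumptions, not the paper's design).
class QLearningBidder:
    def __init__(self, bid_levels, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.bid_levels = bid_levels          # discretised bid-price multipliers
        self.alpha = alpha                    # learning rate
        self.gamma = gamma                    # discount factor
        self.epsilon = epsilon                # exploration probability
        self.q = {}                           # Q-table: (state, action) -> value

    def choose_bid(self, state):
        """Epsilon-greedy selection of a bid level for the current state (e.g. hour, urgency)."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.bid_levels))
        return max(range(len(self.bid_levels)),
                   key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
        best_next = max((self.q.get((next_state, a), 0.0)
                         for a in range(len(self.bid_levels))), default=0.0)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

# Usage: each hour the agent submits a bid; the reward could be the negative
# electricity cost minus a penalty for unmet urgent demand (a plausible shaping).
agent = QLearningBidder(bid_levels=[0.8, 0.9, 1.0, 1.1, 1.2])
action = agent.choose_bid(state=("hour", 18))
agent.update(("hour", 18), action, reward=-3.2, next_state=("hour", 19))
```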

Highlights


  • Bids are sent to a local DR market, where the local DR market clearing price (LDRMCP) is determined and then communicated back to the end-users

  • Inflexible loads are summarised in a single block at one relative maximum price, while flexible loads form a stepwise demand curve based on the number of agents in the local DR market (see the sketch after this list)
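As a rough illustration of how such a stepwise demand curve could be cleared against supply offers, the sketch below implements a simple uniform-price clearing. The bid and offer structures, and the pay-as-cleared pricing rule, are assumptions for illustration, not the paper's market clearing formulation.

```python
# A minimal sketch of uniform-price clearing with a stepwise demand curve
# (structure assumed from the highlight above; not the paper's exact formulation).
def clear_market(demand_bids, supply_offers):
    """
    demand_bids:   list of (price €/kWh, quantity kW), e.g. one high-priced block
                   for inflexible load plus one step per flexible agent.
    supply_offers: list of (price €/kWh, quantity kW) from the wholesale side.
    Returns (clearing_price, cleared_quantity).
    """
    demand = sorted(demand_bids, key=lambda b: -b[0])   # willingness to pay, descending
    supply = sorted(supply_offers, key=lambda o: o[0])  # marginal cost, ascending

    cleared, price = 0.0, 0.0
    d_i = s_i = 0
    d_left = demand[0][1] if demand else 0.0
    s_left = supply[0][1] if supply else 0.0
    while d_i < len(demand) and s_i < len(supply) and demand[d_i][0] >= supply[s_i][0]:
        q = min(d_left, s_left)                 # trade the overlapping quantity
        cleared += q
        price = supply[s_i][0]                  # uniform price set by the marginal offer
        d_left -= q
        s_left -= q
        if d_left == 0:
            d_i += 1
            d_left = demand[d_i][1] if d_i < len(demand) else 0.0
        if s_left == 0:
            s_i += 1
            s_left = supply[s_i][1] if s_i < len(supply) else 0.0
    return price, cleared

# Example: one inflexible block bid at a high price plus two flexible agent steps.
bids = [(0.50, 30.0), (0.20, 10.0), (0.15, 10.0)]
offers = [(0.10, 25.0), (0.18, 25.0)]
print(clear_market(bids, offers))   # -> (0.18, 40.0)
```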


Nomenclature

  • maximum amount of power to be purchased at hour t (kW)
  • total critical loads (kW)
  • total load demand (kW)
  • special action for the Q-learning algorithm
  • learning rate for the Q-learning algorithm
  • expected amount of electricity to buy in a day (kW)
  • penalty factor
  • electricity price when there is no need to supply controllable loads (€/kWh)
  • sensitivity of the agents to pay for buying electricity
  • price of selling power to customers from the local DR market's viewpoint (€/kWh)
  • local DR market price (€/kWh)

Introduction

With the advent of the smart grid, which includes smart equipment such as advanced metering infrastructure and communication facilities such as WiFi or ZigBee, together with Internet of Things capabilities, customers are able to establish two-way communication with the utility for billing or monitoring [2]. These facilities give rise to the idea of treating customers as distinct DR agents who can bid actively in a competitive environment. Since decentralisation aims to make decisions based on local needs, it helps to avoid irregular market behaviour caused by wrong decisions that a central controller within the wholesale market might make.

Literature review
Contributions
Organisation
Multi-agent system
Market-based control scheme
Demand bids
Optimum demand bidding
Local market
Market clearing formulation
Reinforcement learning
Case studies
Consumption profiles
Market clearing price
Average power cost
Agents’ behaviour
Reward
Centralised versus decentralised model
Conclusions
Centralised model