Abstract

This study introduces a novel Demand Response model based on Multi-Agent Reinforcement Learning (MARL-DR), comprising a pricing and incentive scheme aimed at improving the accuracy of existing demand response strategies. The approach also serves as a flexibility solution to prevent the sharp price variations caused by high penetration of non-conventional renewable energy, variations that are passed directly on to end users. The model employs a cooperative-competitive MARL-DR technique based on Q-learning, with the goal of determining optimal prices and incentives that maximize benefits for both customers and the electricity Service Provider (SP). To this end, the model can offer customers both real-time and time-of-use pricing options in order to actively adjust each user's consumption. Additionally, demand characterization factors, such as the coincidence factor (CF), improve the characterization of typical user behavior and allow the influence of individual user demand on system peak demand to be detected more accurately. It is also demonstrated that the cooperative-competitive approach outperforms alternative approaches. Finally, a sensitivity analysis is presented at various stages of the model to verify its accuracy and efficiency in formulating prices and incentives.
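For reference, the coincidence factor mentioned above has a standard definition in demand characterization (the paper's exact notation is not given in this abstract): with $d_i(t)$ the demand of user $i$ at time $t$,

\[ \mathrm{CF} = \frac{\max_t \sum_i d_i(t)}{\sum_i \max_t d_i(t)}, \qquad 0 < \mathrm{CF} \le 1. \]

A CF close to 1 indicates that individual peaks coincide with the system peak, which is precisely the per-user influence on peak demand that the model seeks to detect.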
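The abstract names Q-learning as the underlying learner. The sketch below shows a generic tabular Q-learning loop for a single agent choosing among discretized price/incentive actions; the state space, action space, and toy reward are illustrative assumptions, not the paper's cooperative-competitive formulation.

```python
# Minimal tabular Q-learning sketch for one agent selecting among
# discretized price/incentive levels. N_STATES, N_ACTIONS, and the toy
# reward are placeholders, not the paper's actual MARL-DR design.
import numpy as np

N_STATES = 24      # e.g., one state per hour of the day
N_ACTIONS = 5      # discretized price/incentive levels
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def toy_reward(state, action):
    # Placeholder reward; the paper's actual reward couples customer
    # benefit with SP benefit in a cooperative-competitive setting.
    return -abs(action - (state % N_ACTIONS)) + rng.normal(0.0, 0.1)

for episode in range(500):
    s = int(rng.integers(N_STATES))
    for t in range(24):
        # Epsilon-greedy action selection.
        if rng.random() < EPS:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(Q[s].argmax())
        r = toy_reward(s, a)
        s_next = (s + 1) % N_STATES
        # Standard Q-learning temporal-difference update.
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next
```

In the cooperative-competitive setting the abstract describes, each agent (the SP and the customers) would maintain its own value estimates, with rewards designed so that agents compete over prices and incentives while jointly benefiting from reduced peak demand.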
