Abstract

Large-scale renewable photovoltaic (PV) and battery energy storage system (BESS) units are expected to become significant electricity suppliers in future electricity markets. A bidding model is proposed for PV-integrated BESS power plants in a pool-based day-ahead (DA) electricity market that accounts for the uncertainty of PV generation output. In the proposed model, the market clearing process is treated as the external environment, and each agent updates its bid price through interaction with that environment to maximize its revenue. A multiagent reinforcement learning (MARL) algorithm, win-or-learn-fast policy hill-climbing (WoLF-PHC), is used to explore optimal bid prices without any information about opponents. A case study validates the computational performance of WoLF-PHC in the proposed model, and the bidding strategy of each participating agent is then analyzed.
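The WoLF-PHC idea named in the abstract can be illustrated with a minimal sketch: each agent keeps a Q-table, a current mixed policy over discrete bid prices, and an average policy; it takes a small policy step when "winning" (current policy outperforms the average policy under its own Q-values) and a larger step when "losing". The state/action spaces, learning rates, and the market-clearing environment below are illustrative assumptions, not the paper's actual settings.

```python
import random
from collections import defaultdict

class WoLFPHCAgent:
    """Minimal WoLF-PHC learner over a discrete set of bid prices (sketch)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95,
                 delta_win=0.01, delta_lose=0.04):
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose
        n = len(actions)
        self.Q = defaultdict(lambda: [0.0] * n)           # action values
        self.pi = defaultdict(lambda: [1.0 / n] * n)      # current policy
        self.pi_avg = defaultdict(lambda: [1.0 / n] * n)  # average policy
        self.counts = defaultdict(int)

    def choose(self, state):
        """Sample an action index from the current mixed policy."""
        r, acc = random.random(), 0.0
        for i, p in enumerate(self.pi[state]):
            acc += p
            if r <= acc:
                return i
        return len(self.actions) - 1

    def update(self, state, a, reward, next_state):
        # Standard Q-learning update against the (market) environment.
        best_next = max(self.Q[next_state])
        self.Q[state][a] += self.alpha * (
            reward + self.gamma * best_next - self.Q[state][a])
        # Incrementally update the average policy.
        self.counts[state] += 1
        c = self.counts[state]
        for i in range(len(self.actions)):
            self.pi_avg[state][i] += (
                self.pi[state][i] - self.pi_avg[state][i]) / c
        # Win-or-learn-fast: small step when winning, large when losing.
        q, pi, pia = self.Q[state], self.pi[state], self.pi_avg[state]
        winning = (sum(p * v for p, v in zip(pi, q)) >
                   sum(p * v for p, v in zip(pia, q)))
        delta = self.delta_win if winning else self.delta_lose
        greedy = max(range(len(q)), key=q.__getitem__)
        # Hill-climb toward the greedy action, then renormalize.
        for i in range(len(self.actions)):
            if i == greedy:
                pi[i] = min(1.0, pi[i] + delta)
            else:
                pi[i] = max(0.0, pi[i] - delta / (len(self.actions) - 1))
        s = sum(pi)
        for i in range(len(self.actions)):
            pi[i] /= s
```

In a market setting, each plant would run one such agent, with the reward being the revenue returned by the clearing process for the submitted bid price; no agent needs any information about its opponents' bids.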

Highlights

  • The share of photovoltaic (PV) installations has grown exponentially worldwide, and PV accounts for the majority of the renewable electricity supply (Zucker and Hinchliffe, 2014)

  • The equilibrium of such models is often difficult to obtain because of the computational burden, and the complexity of these models increases as numerous complicated real-world assumptions and constraints are considered (Ventosa et al., 2005)

  • We propose a DA bidding strategy for PV-attached battery energy storage system (BESS) power plants that maximizes their benefits through self-bidding without relying on any information about competitors


Summary

A Learning-Based Bidding Approach for PV-Attached BESS Power Plants

Reviewed by: Xueqian Fu, China Agricultural University, China; Lei Gan, Hohai University, China.

INTRODUCTION
Introduction to Multiagent Reinforcement Learning
CONCLUSION
DATA AVAILABILITY STATEMENT
