In an optimization-based control approach to solar microgrid energy management, the consumer acts as an agent that continuously interacts with the environment and autonomously learns optimal actions to reduce the power drawn from the grid. Learning is built directly into the consumer's behaviour, so the consumer can decide and act in its own interest for optimal scheduling, evolving through interaction with the influencing variables of the environment. We consider a grid-connected solar microgrid system comprising a local consumer, a renewable generator (a solar photovoltaic system), and a storage facility (a battery). A model-free reinforcement learning algorithm, three-step-ahead Q-learning, is used to optimize battery scheduling in a dynamic environment of load and available solar power; the solar power and load measurements feed the learning algorithm. By increasing the utilization of the battery and the solar generator, optimal performance of the solar microgrid is achieved. Simulation results using real numerical data are presented as a reliability test of the system. Uncertainties in the solar power and the load are taken into account in the proposed control framework.

Index Terms—Solar microgrid; Reinforcement learning; Q-learning; Battery scheduling; Optimization.
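To make the learning loop concrete, the following is a minimal sketch of tabular Q-learning applied to battery scheduling. All state discretization, reward shaping, profiles, and parameter values here are illustrative assumptions, not the paper's formulation; in particular, this sketch uses the standard one-step Q-learning update rather than the three-step-ahead variant the paper proposes. The reward simply penalizes power drawn from the grid after accounting for solar generation and battery flow.

```python
import random

# Hypothetical illustration only; none of these names or values
# come from the paper.
N_LEVELS = 5          # discretized battery state-of-charge levels
ACTIONS = [-1, 0, 1]  # discharge one level, idle, charge one level
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table: one row per state-of-charge level, one column per action.
Q = [[0.0] * len(ACTIONS) for _ in range(N_LEVELS)]

def step(soc, action, solar, load):
    """Apply an action; return the next state and the reward.

    Reward penalizes grid power: grid = load - solar + battery flow,
    where flow is positive when charging (extra demand) and negative
    when discharging (offsets demand).
    """
    new_soc = min(N_LEVELS - 1, max(0, soc + action))
    flow = new_soc - soc
    grid = max(0.0, load - solar + flow)
    return new_soc, -grid

def train(episodes=200, horizon=24, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        soc = N_LEVELS // 2
        for _t in range(horizon):
            solar = max(0.0, 2.0 * rng.random())  # toy solar profile
            load = 1.0 + rng.random()             # toy load profile
            # Epsilon-greedy action selection.
            if rng.random() < EPS:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[soc][i])
            nxt, r = step(soc, ACTIONS[a], solar, load)
            # One-step Q-learning update (the paper's three-step-ahead
            # variant would look further along the trajectory).
            Q[soc][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[soc][a])
            soc = nxt
    return Q
```

After training, the greedy policy for a given state of charge is simply the action with the largest Q-value in that row; the table is small enough that no function approximation is needed for this toy setting.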
