Abstract

Global energy demand continues to rise rapidly, and meeting it economically requires methods that minimize the total cost of supply. To this end, this paper proposes a multi-agent reinforcement learning system for combined demand response and voltage control under time-of-use pricing. A long short-term memory (LSTM) network performs day-ahead load forecasting to reduce uncertainty about future demand. The Q-learning algorithm is used; because it is model-free, the agents require no prior knowledge of the environment. Reinforcement learning is central to this work, as it allows each agent to learn its optimal behavior autonomously, without explicit training by the end user. To enable effective cooperation, each household is controlled by its own agent, and all household agents are coordinated by a master agent, the service provider. A dedicated voltage control agent detects voltage-level violations in the system and removes them through optimal decision-making. The proposed system performs well: the overall cost of electricity is reduced and voltage-level violations are eliminated across the system. Specifically, the mechanism reduces the total average aggregated load demand from 5.23 kW to 3.86 kW and the total aggregated average cost from 94.01 Rs to 60.80 Rs.
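The abstract does not include implementation details. As a minimal sketch of the kind of model-free update a household agent could run, the code below implements tabular Q-learning with an epsilon-greedy policy, assuming a discretized hour-of-day state, a small set of hypothetical load-shifting actions, and a reward equal to the negative electricity cost under an illustrative time-of-use tariff. All state/action definitions, tariff values, and hyperparameters here are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative state/action spaces and hyperparameters (not from the paper).
N_STATES = 24      # hour-of-day buckets, e.g. informed by the day-ahead forecast
N_ACTIONS = 3      # e.g. shift load earlier, keep schedule, shift later
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(state):
    """Epsilon-greedy policy: explore occasionally, otherwise act greedily."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """Model-free Q-learning update: only the observed (s, a, r, s')
    transition is needed, no model of the environment."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# Toy training loop with a hypothetical two-level time-of-use tariff:
# a higher price during evening peak hours (values in Rs/kWh, illustrative).
tou_price = np.where(np.arange(24) >= 17, 25.0, 10.0)
for episode in range(500):
    state = 0
    for hour in range(23):
        action = choose_action(state)
        load_kw = 1.0 + 0.5 * action          # hypothetical shifted load
        reward = -tou_price[state] * load_kw  # lower cost => higher reward
        next_state = state + 1
        update(state, action, reward, next_state)
        state = next_state
```

In the paper's setting, each household would run an agent of this kind, with the master agent (service provider) coordinating the household agents and a separate voltage control agent acting on detected voltage-level violations; how those agents exchange information is not specified in the abstract.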
