Abstract

The extensive penetration of distributed energy resources (DERs), particularly electric vehicles (EVs), poses a significant challenge to distribution grids due to their limited capacity. Smart charging can alleviate this issue, but most optimization algorithms developed so far either assume perfect knowledge of the future or rely on complex forecasting models. In this paper we propose to use reinforcement learning (RL) with replay of past experience to optimally operate an EV charger. We also introduce explorative rewards to better adapt to changes in the environment. The RL agent controls the charger's power consumption to minimize cost and to prevent lines and transformers from being overloaded. Simulations were carried out on the IEEE 13-bus test feeder with load profile data from a residential area. To reflect realistic data availability, the agent is trained using only the transformer current and the local charger's state, namely the state of charge (SOC) and the timestamp. Several algorithms, namely Q-learning, SARSA, Dyna-Q, and Dyna-Q+, are tested to identify the one best suited to a stochastic environment with low-frequency data streaming.
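For illustration, a minimal tabular Dyna-Q+ sketch is given below, combining real-experience updates, replay of stored transitions (planning), and an exploration bonus that grows with the time since a state-action pair was last visited. The state encoding (binned transformer current, SOC, and time of day), the action set of charger power levels, and all hyperparameter values are assumptions made for this sketch, not details taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical action set: charger power levels in kW (0 = pause charging).
ACTIONS = [0.0, 3.7, 7.4, 11.0]


class DynaQPlus:
    """Tabular Dyna-Q+.

    States are assumed to be hashable tuples, e.g.
    (transformer_current_bin, soc_bin, time_of_day_bin).
    During planning, replayed rewards get a bonus kappa * sqrt(tau),
    where tau is the number of steps since the pair was last tried
    for real; this encourages re-exploring stale parts of the model.
    """

    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1,
                 kappa=1e-3, planning_steps=20):
        self.q = defaultdict(float)   # Q[(state, action)] -> value
        self.model = {}               # (state, action) -> (reward, next_state)
        self.last_visit = {}          # (state, action) -> step of last real visit
        self.step = 0
        self.alpha, self.gamma = alpha, gamma
        self.epsilon, self.kappa = epsilon, kappa
        self.planning_steps = planning_steps

    def act(self, state):
        # Epsilon-greedy action selection over the power levels.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        self.step += 1
        # Direct RL update from the real transition.
        self._q_update(state, action, reward, next_state)
        # Update the deterministic one-step model.
        self.model[(state, action)] = (reward, next_state)
        self.last_visit[(state, action)] = self.step
        # Planning: replay stored transitions with the exploration bonus.
        # (A full Dyna-Q+ also plans over never-tried actions; omitted here.)
        for _ in range(self.planning_steps):
            (s, a), (r, s2) = random.choice(list(self.model.items()))
            tau = self.step - self.last_visit[(s, a)]
            self._q_update(s, a, r + self.kappa * tau ** 0.5, s2)

    def _q_update(self, s, a, r, s2):
        # Standard Q-learning backup.
        best_next = max(self.q[(s2, b)] for b in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
```

In such a setup the reward would typically combine the (negative) electricity cost of the chosen power level with a penalty whenever the observed transformer current exceeds its rated limit; the bonus term is what distinguishes Dyna-Q+ from plain Dyna-Q and lets the agent readjust when the environment changes.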
