Governments are currently subsidizing growth in the electric car market and the associated infrastructure in order to accelerate the transition to more sustainable mobility. To avoid the grid overload that results from simultaneously charging too many electric vehicles, there is a need for smart charging coordination systems. In this paper, we propose a charging coordination system based on Reinforcement Learning using an artificial neural network as a function approximator. Taking into account the baseload present in the power grid, a central agent creates forward-looking, coordinated charging schedules for an electric vehicle fleet of any size. In contrast to optimization-based charging strategies, system dynamics such as future arrivals, departures, and energy consumption do not have to be known beforehand. We implement and compare a range of parameter variants that differ in terms of the reward function and the use of prioritized experience replay. Subsequently, we use a case study to compare our Reinforcement Learning algorithm with several other charging strategies. The Reinforcement Learning-based charging coordination system is shown to perform very well: all electric vehicles have enough energy for their next trip on departure, and charging is carried out almost exclusively during the load valleys at night. Compared with an uncontrolled charging strategy, the Reinforcement Learning algorithm reduces the variance of the total load by 65%. The performance of our Reinforcement Learning concept comes close to that of an optimization-based charging strategy. However, an optimization algorithm needs to know certain information beforehand, such as the vehicle's departure time and its energy requirement on arriving at the charging station. Our novel Reinforcement Learning-based charging coordination system therefore offers a flexible, easily adaptable, and scalable approach for an electric vehicle fleet under realistic operating conditions.
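As a rough illustration of what coordinated, valley-filling charging achieves relative to uncontrolled charging on arrival, the following Python sketch compares the two. It is not the paper's Reinforcement Learning agent, only a greedy heuristic baseline; the baseload curve, fleet sessions, and power ratings are all hypothetical numbers chosen for illustration.

```python
# Illustration only: a greedy valley-filling heuristic, NOT the paper's
# RL agent. All numbers below (baseload curve, charging sessions) are
# hypothetical.

def hours_in_window(arrival, departure):
    """Hours from arrival (inclusive) to departure (exclusive), wrapping midnight."""
    span = (departure - arrival) % 24
    return [(arrival + i) % 24 for i in range(span)]

def uncontrolled(baseload, sessions):
    """Each vehicle charges at full power immediately on arrival."""
    load = list(baseload)
    for arrival, departure, hours_needed, power in sessions:
        for h in hours_in_window(arrival, (arrival + hours_needed) % 24):
            load[h] += power
    return load

def coordinated(baseload, sessions):
    """Shift each vehicle's charging into the lowest-load hours of its
    arrival-to-departure window (greedy valley filling)."""
    load = list(baseload)
    for arrival, departure, hours_needed, power in sessions:
        window = hours_in_window(arrival, departure)
        for h in sorted(window, key=lambda h: load[h])[:hours_needed]:
            load[h] += power
    return load

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# Hypothetical hourly baseload (MW): night valley, evening peak.
baseload = [30, 28, 27, 26, 26, 27, 32, 40, 48, 50, 52, 53,
            54, 55, 56, 58, 62, 68, 72, 70, 60, 50, 42, 35]
# Sessions: (arrival_hour, departure_hour, charging_hours_needed, power_MW)
sessions = [(18, 7, 4, 2.0), (19, 8, 3, 2.0), (20, 6, 5, 2.0)]

u, c = uncontrolled(baseload, sessions), coordinated(baseload, sessions)
print(f"variance reduction: {100 * (1 - variance(c) / variance(u)):.0f}%")
```

Both strategies deliver the same total energy, but the coordinated schedule fills the night-time valley and so flattens the total load, which is the qualitative effect the paper quantifies at a 65% variance reduction for its RL agent.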