Abstract

The modelling of electric vehicle (EV) charging load is of great importance for the safe and stable operation of power systems. However, traditional Monte Carlo and mathematical optimization methods struggle to establish a detailed and precise charging load model for EVs on both the temporal and spatial scales, especially for plug-in electric taxis (PETs), owing to their strong randomness and complex operating behaviours. To address this problem, multiple agents and multi-step Q(λ) learning are utilized to model the charging loads of PETs on both the temporal and spatial scales. Firstly, a multi-agent framework is developed based on the Java Agent DEvelopment Framework (JADE), and a variety of agents are built to simulate the operation-related players as well as the operating environment. Then, multi-step Q(λ) learning is developed for the PET agents to make decisions under various situations, and its performance is compared with that of Q-learning. Simulation results illustrate that the proposed framework is able to dynamically simulate daily PET operation and to obtain the charging loads of PETs on both the temporal and spatial scales, and that multi-step Q(λ) learning outperforms Q-learning in terms of convergence rate and reward. Moreover, PET shift strategies and electricity pricing mechanisms are investigated, and the results indicate that appropriate PET operation rules significantly improve the safe and reliable operation of power systems.
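As a rough illustration of the kind of update the PET agents would perform, the sketch below shows a single Watkins-style Q(λ) update with accumulating eligibility traces. This is a generic, hypothetical example: the state, action, and reward definitions here are placeholders, not the paper's actual PET decision model, and the hyperparameter values are arbitrary.

```python
def q_lambda_update(Q, E, s, a, r, s_next, alpha=0.1, gamma=0.9, lam=0.8):
    """One Q(lambda) update with eligibility traces (illustrative sketch).

    Q: dict mapping (state, action) -> estimated value
    E: dict mapping (state, action) -> eligibility trace
    States/actions are placeholders, not the paper's PET model.
    """
    # Greedy bootstrap value of the next state (0.0 if unseen).
    next_vals = [Q[(s2, a2)] for (s2, a2) in Q if s2 == s_next]
    best_next = max(next_vals, default=0.0)

    # Temporal-difference error for the transition (s, a, r, s_next).
    delta = r + gamma * best_next - Q.get((s, a), 0.0)

    # Accumulating trace: mark (s, a) as recently visited.
    E[(s, a)] = E.get((s, a), 0.0) + 1.0

    # Propagate the TD error backward along all traced pairs,
    # then decay every trace by gamma * lambda.
    for key in list(E):
        Q[key] = Q.get(key, 0.0) + alpha * delta * E[key]
        E[key] *= gamma * lam
    return Q, E
```

Because the eligibility traces spread each TD error over recently visited state-action pairs, this multi-step update typically propagates reward information faster than one-step Q-learning, which is consistent with the convergence advantage reported in the abstract.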
