Abstract

Vehicle-to-grid (V2G) technology is a promising solution to the energy supply security issues associated with future electric grids. A decisive factor in successful V2G is effective electric vehicle (EV) charging management that meets travel demands at minimal charging cost, particularly in how it accounts for uncertainties and EV heterogeneity. In this study, a deep Q-network (DQN)-based reinforcement learning (RL) method is proposed to learn the optimal EV charging strategy, considering empirical travel-pattern heterogeneity and unpredictable electricity prices. The effectiveness and generalizability of the proposed DQN-based RL method were validated using over five million km of actual driving data from typical Chinese cities. In particular, with the proposed method EVs can save over 98% of the electricity cost without future electricity price information, compared with charging as soon as possible. The empirical results also reveal that V2G-oriented charging management is sensitive to the charging/discharging power rate, the frequency and range of electricity-price fluctuations, and the departure time. We quantified this sensitivity with the value of information (VOI) and found that: (1) knowing the departure time can significantly reduce charging costs in most cases (average VOI: 5.4 CNY per charging/discharging session); and (2) more historical data does not always lead to a higher electricity-price VOI, and prices with sudden surges may even have a negative VOI.
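The charging-management problem the abstract describes can be sketched in miniature: an agent chooses a charge, idle, or discharge action each hour to minimize electricity cost while guaranteeing the battery reaches a target state of charge (SOC) by departure. The sketch below is a simplified tabular Q-learning stand-in for the paper's DQN; the price series, SOC discretization, horizon, and unmet-demand penalty are all illustrative assumptions, not the authors' setup.

```python
import random

# Actions: discharge (sell back to grid), idle, charge (one SOC step/hour).
ACTIONS = [-1, 0, 1]
HORIZON = 8                   # hours until departure (assumed)
SOC_LEVELS = 5                # discretized battery levels 0..4
TARGET_SOC = 4                # required SOC at departure
PRICES = [0.8, 0.3, 0.2, 0.3, 0.9, 1.0, 0.6, 0.4]  # synthetic prices, CNY per SOC step

def step(t, soc, a):
    """Apply action a at hour t; return (next SOC, reward)."""
    soc2 = min(max(soc + a, 0), SOC_LEVELS - 1)
    cost = PRICES[t] * (soc2 - soc)          # pay to charge, earn by discharging
    reward = -cost
    if t == HORIZON - 1 and soc2 < TARGET_SOC:
        reward -= 10.0 * (TARGET_SOC - soc2)  # penalty for unmet travel demand
    return soc2, reward

def train(episodes=5000, alpha=0.2, gamma=1.0, eps=0.1, seed=0):
    """Tabular Q-learning over (hour, SOC) states with eps-greedy exploration."""
    rng = random.Random(seed)
    Q = {(t, s): [0.0] * len(ACTIONS)
         for t in range(HORIZON) for s in range(SOC_LEVELS)}
    for _ in range(episodes):
        soc = 1                               # assumed arrival SOC
        for t in range(HORIZON):
            if rng.random() < eps:
                ai = rng.randrange(len(ACTIONS))
            else:
                ai = max(range(len(ACTIONS)), key=lambda i: Q[(t, soc)][i])
            soc2, r = step(t, soc, ACTIONS[ai])
            nxt = 0.0 if t == HORIZON - 1 else max(Q[(t + 1, soc2)])
            Q[(t, soc)][ai] += alpha * (r + gamma * nxt - Q[(t, soc)][ai])
            soc = soc2
    return Q

def greedy_rollout(Q):
    """Follow the learned policy; return final SOC and total reward (-cost)."""
    soc, total = 1, 0.0
    for t in range(HORIZON):
        ai = max(range(len(ACTIONS)), key=lambda i: Q[(t, soc)][i])
        soc, r = step(t, soc, ACTIONS[ai])
        total += r
    return soc, total
```

A trained policy learns the V2G behavior the abstract quantifies: charging in cheap hours, discharging back to the grid in expensive ones, and still departing with a full battery. The paper's DQN replaces the Q-table with a neural network so the policy generalizes over continuous prices and heterogeneous travel patterns.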
