Abstract

An energy management strategy (EMS) is one of the key technologies for improving the fuel efficiency of hybrid electric vehicles (HEVs): it governs the energy flow between the fuel tank and the electric energy storage. With the rapid development of artificial intelligence, especially after the great success of AlphaGo, reinforcement learning (RL) has opened a new window for EMS design. Although many RL-based solutions have been successfully applied to EMS tasks, most current approaches treat RL merely as an offline optimization tool, i.e., RL is used to solve an optimization problem given a simulation model. Such a method can realize optimal control in simulation, but there is no guarantee of comparable performance in the real physical world because of the simulation-to-real gap. Meanwhile, for many industrial EMS tasks, especially when optimizing an existing controller, a high-fidelity simulator may be unavailable; what may exist instead are logging data generated by a suboptimal existing controller and simple models of the principal powertrain components that neglect detailed component dynamics. In this context, a hybrid algorithm combining data-driven and simulation-based RL is proposed that learns from both the real logging data and the simple simulated model. Based on hardware-in-the-loop (HIL) results, this hybrid algorithm obtains a near-optimal policy (fuel consumption is decreased by approximately 6.10%) from a small batch of logging data and a simple simulated model, when compared with the dynamic programming (DP) method using a high-fidelity simulation model that mimics the real physical world.
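To make the hybrid data-driven plus simulation-based idea concrete, the following is a minimal, purely illustrative sketch: a tabular Q-learning loop whose updates draw partly from a fixed batch of logged transitions (standing in for data from an existing suboptimal controller) and partly from rollouts of a simple model. The state/action discretization, reward proxy, mixing ratio, and the `simple_model` function are all hypothetical assumptions for illustration, not the paper's actual EMS formulation or algorithm.

```python
import random

random.seed(0)

STATES = range(5)        # e.g. discretized battery state of charge
ACTIONS = range(3)       # e.g. coarse engine/motor power-split choices
GAMMA, ALPHA = 0.95, 0.1
REAL_FRACTION = 0.5      # fraction of updates drawn from logged data

def simple_model(s, a):
    """Toy stand-in for a simple powertrain model with no detailed dynamics."""
    s_next = max(0, min(4, s + a - 1))
    reward = -abs(s_next - 2)        # hypothetical fuel-cost proxy
    return s_next, reward

# Hypothetical logged transitions; in practice these would come from the real
# vehicle under the existing controller, here they are mocked with the toy model.
log_batch = [(s, a, *simple_model(s, a)) for s in STATES for a in ACTIONS]

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

for step in range(5000):
    if random.random() < REAL_FRACTION:
        s, a, s_next, r = random.choice(log_batch)   # replay logged data
    else:
        s, a = random.choice(STATES), random.choice(ACTIONS)
        s_next, r = simple_model(s, a)               # simulated transition
    target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])        # standard Q-learning update

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

The mixing ratio `REAL_FRACTION` is the key design knob in such a scheme: weighting logged data more heavily anchors the policy to real-world behavior, while weighting the simple model more heavily broadens state-action coverage beyond what the suboptimal controller ever visited.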
