Abstract

Reinforcement learning (RL)-based energy management is a major research focus for hybrid electric vehicles. Recent advances in RL-based energy management emphasize energy-saving performance but pay less attention to the constrained setting required for training safety. This article proposes an RL framework named coach-actor-double-critic (CADC) that formulates energy management optimization as a constrained Markov decision process (CMDP). A bilevel onboard controller combines a neural network (NN)-based actor strategy with a rule-based coach strategy for online energy management. Whenever the actor's output leaves the constrained range of feasible solutions, the coach takes charge of energy management to ensure safety. Through Lagrangian relaxation, the CMDP optimization is transformed into an unconstrained dual problem that minimizes energy consumption while also minimizing coach participation. The actor's parameters are updated by policy gradient through RL training with the Lagrangian value function. Two critics with identical structure estimate the value function synchronously to avoid overestimation bias. Several experiments with bus trajectory data demonstrate the optimality, self-learning ability, and adaptability of CADC. The results indicate that CADC outperforms existing RL-based strategies and achieves more than 95% of the energy-saving rate of the offline global optimum.
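The sketch below illustrates two ideas summarized in the abstract: the coach overriding the actor when its output leaves the feasible range, and the Lagrangian-relaxed objective that penalizes coach participation alongside energy consumption. It is a minimal toy illustration, not the authors' implementation; all names (safe_action, lagrangian_cost, the toy energy model, the fixed multiplier) are assumptions for exposition.

```python
import numpy as np

def safe_action(actor_action, coach_action, lower, upper):
    """Coach takes over whenever the actor's output leaves the feasible range."""
    if lower <= actor_action <= upper:
        return actor_action, 0.0          # actor acts, no coach participation
    return coach_action, 1.0              # coach overrides, participation flag = 1

def lagrangian_cost(energy_cost, coach_participation, lam):
    """Unconstrained dual objective: energy consumption plus a Lagrange-multiplier
    penalty on coach participation (the CMDP constraint)."""
    return energy_cost + lam * coach_participation

def double_critic_target(q1, q2):
    """Double-critic idea: keep two value estimates and take the more pessimistic
    one (the larger cost for a minimization problem) to curb overestimation bias."""
    return max(q1, q2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lam = 0.5                                   # Lagrange multiplier (assumed fixed here)
    total = 0.0
    for _ in range(5):
        a_actor = rng.uniform(-1.5, 1.5)        # toy NN-actor output
        a_coach = np.clip(a_actor, -1.0, 1.0)   # toy rule-based fallback
        a, used_coach = safe_action(a_actor, a_coach, -1.0, 1.0)
        energy = a ** 2                         # toy stand-in for energy consumption
        total += lagrangian_cost(energy, used_coach, lam)
    print(f"accumulated Lagrangian cost: {total:.3f}")
```

In the full method, the multiplier and both critics would be learned during training; here they are fixed only to keep the example self-contained.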
