Abstract

Reinforcement learning (RL) can improve control performance by learning optimal control policies in the end-use environment of vehicles and other systems. To do so, RL algorithms must sufficiently explore the state and action spaces, which presents inherent safety risks, so applying RL to safety-critical systems such as vehicle powertrain control requires safety-enforcement approaches. In this paper, we seek control barrier function (CBF)-based safety certificates that demarcate safe regions within which the RL agent can optimize control performance. In particular, we derive optimal high-order CBFs that avoid conservatism while ensuring safety for a vehicle in traffic. We demonstrate the high-order CBF with an RL agent that uses a deep actor-critic architecture to learn to optimize fuel economy and other driver-accommodation metrics. We find that the optimized high-order CBF allows the RL-based powertrain control agent to achieve higher total rewards without any crashes during training and evaluation, while accommodating driver demands better than previously proposed exponential barrier function filters and model-based baseline controllers.
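As a rough illustration of the kind of safety filter the abstract describes (not the paper's actual formulation), the sketch below applies a second-order CBF condition to a car-following safety set h(x) = gap - d_min >= 0, which has relative degree two in the acceleration input. With psi1 = h_dot + alpha1*h, enforcing psi1_dot + alpha2*psi1 >= 0 yields an upper bound on ego acceleration; for a single input, the usual CBF quadratic program reduces to clipping the RL command against that bound. All names, class-K gains (alpha1, alpha2), and limits here are illustrative assumptions.

    def hocbf_safe_accel(gap, v_ego, v_lead, a_rl,
                         d_min=5.0, alpha1=1.0, alpha2=1.0,
                         a_lead=0.0, a_min=-6.0, a_max=3.0):
        """Filter an RL acceleration command through a second-order CBF.

        Safety set: h(x) = gap - d_min >= 0, with h_dot = v_lead - v_ego
        and h_ddot = a_lead - a_ego. The HOCBF condition
            psi1_dot + alpha2 * psi1 >= 0,  psi1 = h_dot + alpha1 * h,
        rearranges to an upper bound on the ego acceleration a_ego.
        """
        h = gap - d_min
        v_rel = v_lead - v_ego  # h_dot
        # a_ego <= a_lead + (alpha1 + alpha2) * v_rel + alpha1 * alpha2 * h
        a_bound = a_lead + (alpha1 + alpha2) * v_rel + alpha1 * alpha2 * h
        # Minimally modify the RL command, then respect actuator limits.
        return max(a_min, min(a_rl, a_bound, a_max))

    # Example: the RL agent requests hard acceleration while closing on the
    # lead vehicle; the filter overrides it with a braking command.
    a_safe = hocbf_safe_accel(gap=12.0, v_ego=20.0, v_lead=15.0, a_rl=2.5)
    print(a_safe)  # -3.0: clipped so that h(x) >= 0 remains forward invariant

In this hedged sketch, larger alpha1 and alpha2 shrink the region where the filter intervenes (less conservatism) at the cost of later, harder corrections; tuning or optimizing these class-K gains is the kind of trade-off the paper's optimized high-order CBFs address.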
