Abstract

This article presents a learning-based, barrier-certified method for learning safe optimal controllers that guarantee operation of safety-critical systems within their safe regions while providing optimal performance. The cost function that encodes the designer's objectives is augmented with a control barrier function (CBF) to ensure both safety and optimality. A damping coefficient incorporated into the CBF specifies the trade-off between safety and optimality. The proposed formulation provides look-ahead, proactive safety planning and results in a smooth transition of states within the feasible set. That is, instead of applying an optimal controller and intervening only when the safety constraints are violated, safety is planned and optimized along with performance to minimize intervention with the optimal controller. It is shown that the addition of the CBF to the cost function does not affect the stability or optimality of the designed controller within the safe region. This formulation enables the optimal safe solution to be found iteratively. An off-policy reinforcement learning (RL) algorithm is then employed to find a safe optimal policy without requiring complete knowledge of the system dynamics, while satisfying the safety constraints. The efficacy of the proposed safe RL control design approach is demonstrated on lane keeping as an automotive control problem.
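The abstract does not give the paper's exact formulation, but a common way to sketch a CBF-augmented cost of the kind described is the following, assuming a safe set \(\mathcal{C} = \{x : h(x) \ge 0\}\), a reciprocal-type barrier \(B(x)\), and a damping coefficient \(\gamma_b\) (all symbols here are illustrative, not taken from the paper):

```latex
% Hypothetical CBF-augmented cost: the standard quadratic performance
% term is augmented with a barrier that grows unbounded as the state
% approaches the boundary of the safe set C = {x : h(x) >= 0}.
\[
J(x_0, u) = \int_{0}^{\infty} \Big( Q(x) + u^\top R\, u
            + \gamma_b\, B(x) \Big)\, dt,
\qquad
B(x) = \frac{1}{h(x)},
\]
```

Here \(\gamma_b\) plays the role of the damping coefficient mentioned in the abstract: a larger \(\gamma_b\) weights safety more heavily (keeping trajectories farther from the boundary of \(\mathcal{C}\)), while a smaller \(\gamma_b\) prioritizes the nominal performance objective \(Q(x) + u^\top R\, u\).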
