Abstract

Safety is a critical concern when applying reinforcement learning (RL) to real-world control tasks. However, existing safe RL methods either consider only expected safety constraint violations and fail to maintain safety guarantees, or rely on overly conservative safety certificate tools borrowed from safe control theory, which sacrifice reward optimization and require analytic system models. This letter proposes a model-free safe RL algorithm that achieves near-zero constraint violations together with high rewards. Our key idea is to jointly learn a policy and a neural barrier certificate under a stepwise state constraint setting. The barrier certificate is learned in a model-free manner by minimizing the violations of appropriate barrier properties on transition data collected by the policy. We extend the single-step invariant property of the barrier certificate to a multi-step version and construct the corresponding multi-step invariant loss. This loss balances the bias and variance of the barrier certificate and enhances both the safety and the performance of the policy. The policy is optimized under the constraint of the multi-step invariant property using the Lagrangian method, and the constraint is evaluated in a model-free manner by introducing an importance sampling weight. We test our algorithm on multiple problems, including classic control tasks, robot collision avoidance, and autonomous driving. Results show that our algorithm achieves near-zero constraint violations and high performance compared to the baselines. Moreover, the learned barrier certificates successfully identify the feasible regions of multiple tasks.
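The abstract does not give the exact training objective, but the certificate-learning step can be illustrated with a minimal sketch. The sketch below assumes a convention in which a barrier network outputs positive values on constraint-violating states and non-positive values on states it certifies as safe, and in which the multi-step invariant property requires the barrier value to decay along k-step state sequences collected by the policy. All names (BarrierNet, certificate_loss, decay), the hinge margins, and the specific decay form are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multi-step barrier-certificate loss.
# Convention assumed here: B(s) > 0 on unsafe (constraint-violating) states,
# B(s) <= 0 on certified-safe states, and B should not grow along
# k-step rollouts generated by the current policy.
import torch
import torch.nn as nn


class BarrierNet(nn.Module):
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s):
        return self.net(s).squeeze(-1)


def hinge_mean(x):
    # Mean hinge penalty; returns 0 when the selection is empty.
    return torch.relu(x).mean() if x.numel() > 0 else torch.zeros(())


def certificate_loss(barrier, states, unsafe_mask, rollouts, decay=0.9):
    """Penalize violations of the assumed barrier properties.

    states:      (N, d) states sampled from collected transitions
    unsafe_mask: (N,) bool, True where the stepwise state constraint is violated
    rollouts:    (N, k+1, d) k-step state sequences collected by the policy
    decay:       illustrative decay rate in the multi-step invariant condition
    """
    b = barrier(states)
    # Constraint-violating states should receive clearly positive values ...
    loss_unsafe = hinge_mean(1.0 - b[unsafe_mask])
    # ... and states labeled safe should receive non-positive values.
    loss_safe = hinge_mean(1.0 + b[~unsafe_mask])
    # Multi-step invariant term: the barrier value at the end of each
    # k-step rollout should not exceed the decayed value at its start.
    k = rollouts.shape[1] - 1
    b_start = barrier(rollouts[:, 0])
    b_end = barrier(rollouts[:, -1])
    loss_invariant = torch.relu(b_end - (decay ** k) * b_start).mean()
    return loss_unsafe + loss_safe + loss_invariant
```

In the same spirit, the multi-step invariant term (reweighted by an importance sampling ratio between the data-collecting and current policies) would serve as the constraint in a Lagrangian policy update; that part is omitted here since the abstract gives no further detail.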
