Abstract

Applying artificial intelligence (AI) algorithms in safety-critical areas remains extremely challenging because such algorithms cannot be fully verified at design time. In this paper, we propose RTA-IR, an architecture that bypasses formal verification of the AI algorithm by incorporating runtime assurance (RTA), providing safety assurance for the AI controllers of complex autonomous vehicles (such as those obtained using neural networks) without excessive performance sacrifice. RTA-IR consists of a high-performance but unverified advanced controller, two verifiable safety controllers, and a decision module designed on the basis of the Responsibility-Sensitive Safety (RSS) model. The advanced controller is built on attention-based generative adversarial imitation learning (GAIL), which imitates the efficient policies of experts from a set of expert demonstrations. RSS provides verifiable safety criteria and switching logic for the decision module: RTA-IR keeps the vehicle safe whenever the advanced controller produces an unsafe control action, and returns control to the advanced controller once safety is confirmed. We evaluated RTA-IR on a driving task under two levels of traffic density. Experiments show that RTA-IR outperforms the baseline method in terms of both safety and efficiency.
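As a rough illustration of the switching logic described above, the sketch below combines the standard RSS safe-longitudinal-distance criterion (Shalev-Shwartz et al., 2017) with a simplex-style decision rule. The function names, parameter values, and the collapse of the paper's two safety controllers into a single fallback are illustrative assumptions, not the paper's implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class RSSParams:
    # Illustrative values; the paper's calibration may differ.
    rho: float = 0.5          # rear vehicle's response time [s]
    a_max_accel: float = 2.0  # max acceleration during response time [m/s^2]
    b_min_brake: float = 4.0  # min braking of the rear (ego) vehicle [m/s^2]
    b_max_brake: float = 8.0  # max braking of the front vehicle [m/s^2]


def rss_safe_longitudinal_distance(v_rear: float, v_front: float,
                                   p: RSSParams) -> float:
    """Minimum safe following distance per the RSS model, clamped at zero."""
    v_after_response = v_rear + p.rho * p.a_max_accel
    d = (v_rear * p.rho
         + 0.5 * p.a_max_accel * p.rho ** 2
         + v_after_response ** 2 / (2 * p.b_min_brake)
         - v_front ** 2 / (2 * p.b_max_brake))
    return max(d, 0.0)


def decide(gap: float, v_rear: float, v_front: float, p: RSSParams,
           advanced_action, safety_action):
    """RTA decision module sketch: run the unverified learned controller
    only while the RSS criterion holds; otherwise hand control to the
    verified safety controller, restoring the learned controller once
    the situation is confirmed safe again."""
    if gap >= rss_safe_longitudinal_distance(v_rear, v_front, p):
        return advanced_action  # high-performance GAIL policy
    return safety_action        # verifiable fallback
```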
