Abstract

Recent advances in learning-based perception systems have led to drastic improvements in the performance of robotic systems such as autonomous vehicles and surgical robots. These perception systems, however, are hard to analyze, and errors in them can propagate and cause catastrophic failures. In this paper, we consider the problem of synthesizing safe and robust controllers for robotic systems that rely on complex perception modules for feedback. We propose a counterexample-guided synthesis framework that iteratively builds simple surrogate models of the complex perception module and uses them to find safe control policies. The framework employs a falsifier to find counterexamples, i.e., traces of the system that violate a safety property, and extracts from them information that enables efficient modeling of the perception module and the errors in it. These models are then used to synthesize controllers that are robust to perception errors. If the resulting policy is not safe, we gather new counterexamples and repeat the process, eventually finding a controller that keeps the system safe even under perception failures. We demonstrate our framework on two scenarios in simulation, namely lane keeping and automatic braking, and show that it generates controllers that are safe, as well as a simpler model of a deep neural network-based perception system that can provide meaningful insight into the operation of the perception system.
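
The abstract describes a counterexample-guided loop: falsify, model the perception errors from counterexamples, re-synthesize, and repeat until no violation is found. The sketch below is a minimal, hypothetical rendering of that loop in Python; all names (falsify, fit_surrogate, synthesize) are illustrative placeholders for the paper's components, not its actual API, and are passed in as callables so the loop itself is self-contained.

```python
from typing import Callable, List, Optional

def synthesize_safe_controller(
    initial_controller,
    falsify: Callable,        # returns an unsafe trace, or None if safe
    fit_surrogate: Callable,  # fits a simple perception(-error) model
    synthesize: Callable,     # computes a controller robust to that model
    max_iters: int = 20,
):
    """Hypothetical counterexample-guided synthesis loop (sketch only)."""
    controller = initial_controller
    counterexamples: List = []

    for _ in range(max_iters):
        # 1. Falsification: search for a closed-loop trace that violates
        #    the safety property under the real perception module.
        trace = falsify(controller)
        if trace is None:
            return controller  # no counterexample found: controller is safe
        counterexamples.append(trace)

        # 2. Surrogate modeling: use the counterexamples to build a
        #    simple model of the perception module and the errors in it.
        surrogate = fit_surrogate(counterexamples)

        # 3. Synthesis: compute a controller that stays safe under the
        #    perception errors captured by the surrogate.
        controller = synthesize(surrogate)

    raise RuntimeError("No safe controller found within the iteration budget.")
```

Under this reading, termination corresponds to the falsifier failing to find a new counterexample, at which point the current controller is returned as safe with respect to the property being checked.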
