Abstract

The safety of autonomous driving systems (ADS) with machine learning (ML) components is threatened by adversarial examples. Mainstream defense techniques target adversarial examples that make the ML model fail; however, such an example does not necessarily cause a safety problem for the entire ADS. A method for detecting the adversarial examples that actually drive the ADS into unsafe states would therefore help improve these defenses. This paper proposes an approach, based on model checking, to detect such safety-critical adversarial examples in typical autonomous driving scenarios. The driving scenario and the semantic effect of adversarial attacks on object detection are specified as a Network of Timed Automata. The safety properties of the ADS are specified and verified with the UPPAAL model checker to determine whether the adversarial examples lead to safety violations. The model-checking result reveals, for a given scenario, the critical time interval during which an adversarial attack will lead to an unsafe state. The approach is demonstrated on a popular adversarial attack algorithm in a typical autonomous driving scenario, and its effectiveness is shown through a series of simulations on the CARLA platform.

Keywords: Autonomous driving, Model checking, Adversarial examples
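
For illustration only, safety properties of this kind are commonly phrased as invariance and reachability queries in UPPAAL's requirement specification language. The queries below are a minimal sketch of what such properties could look like; the automaton and location names (Ego, Attack, Collision, Active) are hypothetical placeholders and are not taken from the paper.

    // Invariance: on every run, the ego-vehicle automaton never reaches its Collision location.
    A[] not Ego.Collision

    // Reachability: does some run exist in which a collision occurs while the attack is active?
    // If UPPAAL reports this query satisfied, the attack interval is safety-critical for the scenario.
    E<> (Attack.Active && Ego.Collision)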
