Abstract

We introduce a model for agent-environment systems where the agents are implemented via feed-forward ReLU neural networks and the environment is non-deterministic. We study the verification problem of such systems against CTL properties. We show that verifying these systems against reachability properties is undecidable. We introduce a bounded fragment of CTL, show its usefulness in identifying shallow bugs in the system, and prove that the verification problem against specifications in bounded CTL is in coNExpTime and PSpace-hard. We introduce sequential and parallel algorithms for MILP-based verification of agent-environment systems, present an implementation, and report the experimental results obtained against a variant of the VerticalCAS use-case and the frozen lake scenario.
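The bounded-reachability question studied in the abstract can be illustrated with a toy, self-contained sketch. Everything below (the two-neuron agent, the integer state space, and the successor relation) is invented for illustration and is not the paper's model: a hand-coded feed-forward ReLU agent picks an action from its observation, a non-deterministic environment returns a set of successor states, and all runs are unrolled up to a bound k to decide an EF-style reachability query.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def agent(obs):
    # Tiny hand-written feed-forward ReLU network: one hidden layer,
    # weights chosen by hand purely for illustration.
    h = relu([1.0 * obs - 0.5, -1.0 * obs + 0.5])
    score = h[0] - h[1]
    return 1 if score > 0 else 0   # action: 1 = "advance", 0 = "stay"

def env_successors(state, action):
    # Non-deterministic environment: a (state, action) pair may have
    # several successor states.
    if action == 1:
        return {state + 1, state + 2}   # advancing may overshoot
    return {state}

def bounded_reachable(init, unsafe, k):
    """Bounded EF: does some run of length <= k visit an unsafe state?"""
    frontier = {init}
    for _ in range(k + 1):
        if any(s in unsafe for s in frontier):
            return True
        frontier = {t for s in frontier
                    for t in env_successors(s, agent(float(s)))}
    return False
```

With initial state 1 and `unsafe = {4}`, the unsafe state is reachable within two steps but not within one; detecting this kind of shallow bug is what the bounded fragment of CTL is intended for.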

Highlights

  • Forthcoming autonomous and robotic systems, including autonomous vehicles, are expected to use machine learning (ML) methods for some of their components

  • In the rest of the section, we focus on bounded CTL, for which we develop a decision procedure for the verification problem based on producing a single mixed-integer linear program (MILP) [56] and checking its feasibility


  • While the benefits of formal methods have long been recognised, and they have found wide adoption in safety-critical systems as well as in industrial-scale software, there have been few efforts to introduce verification techniques for systems driven by neural networks
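The highlighted MILP-based decision procedure rests on the fact that a ReLU activation can be captured exactly by linear constraints over a binary variable. Below is a minimal sketch of the standard big-M encoding, assuming a known bound M on the magnitude of pre-activations; the function name and the point-wise checker are illustrative only, since an actual MILP solver would search over the binary variables rather than check a given assignment.

```python
def relu_bigM_constraints_hold(x, y, delta, M=100.0, eps=1e-9):
    """Check one candidate assignment against the standard big-M
    MILP encoding of y = ReLU(x) with binary switch delta:

        y >= x,  y >= 0,  y <= x + M*(1 - delta),  y <= M*delta.

    For |x| <= M, the feasible assignments are exactly those with
    y = max(0, x): delta = 1 forces y = x (the active phase),
    delta = 0 forces y = 0 (the inactive phase).
    """
    assert delta in (0, 1)
    return (y >= x - eps and
            y >= -eps and
            y <= x + M * (1 - delta) + eps and
            y <= M * delta + eps)
```

For example, (x=3, y=3) is feasible with delta=1, while the incorrect output (x=3, y=0) is infeasible for both values of delta, which is how a single MILP feasibility check can refute a claimed network behaviour.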

Summary

Introduction

Forthcoming autonomous and robotic systems, including autonomous vehicles, are expected to use machine learning (ML) methods for some of their components. Differently from more conventional AI systems, which are programmed directly by engineers, ML-based components are synthesised from data and implemented via neural networks. Employing ML components has considerable attractions in terms of performance (e.g., image classifiers) and, sometimes, ease of realisation (e.g., non-linear controllers). However, it also raises concerns about overall system safety: neural networks, as presently used, are known to be fragile and hard to interpret [52].

Related work
Feed‐forward ReLU networks
Neural agent‐environment systems
The verification problem
Unbounded CTL
Bounded CTL
Monolithic encoding
Compositional encoding
Computational complexity of the verification problem
Implementation and experiments
FrozenLake scenario
The aircraft collision avoidance system VerticalCAS
NANES encoding and specification
Conclusions
