Abstract
This paper presents the Neural Network Verification (NNV) software tool, a set-based verification framework for deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS). The crux of NNV is a collection of reachability algorithms that make use of a variety of set representations, such as polyhedra, star sets, zonotopes, and abstract-domain representations. NNV supports both exact (sound and complete) and over-approximate (sound) reachability algorithms for verifying safety and robustness properties of feed-forward neural networks (FFNNs) with various activation functions. For learning-enabled CPS, such as closed-loop control systems incorporating neural networks, NNV provides exact and over-approximate reachability analysis schemes for linear plant models and FFNN controllers with piecewise-linear activation functions, such as ReLUs. For similar neural network control systems (NNCS) that instead have nonlinear plant models, NNV supports over-approximate analysis by combining the star set analysis used for FFNN controllers with zonotope-based analysis for the nonlinear plant dynamics, building on CORA. We evaluate NNV using two real-world case studies: the first is safety verification of ACAS Xu networks, and the second deals with the safety verification of a deep learning-based adaptive cruise control system.
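To make the star-set machinery concrete, here is a minimal Python sketch of exact reachability through an affine layer and a ReLU layer, assuming the standard star representation {c + V*a : C*a <= d}. The Star, step_relu, and relu_layer names are illustrative only, and NNV itself is implemented in MATLAB, so this is a sketch of the idea, not the tool's code.

# Minimal sketch of exact star-set reachability for a ReLU FFNN.
# A star set is {c + V*a : C*a <= d}; names here are illustrative.
import numpy as np
from scipy.optimize import linprog

class Star:
    def __init__(self, c, V, C, d):
        self.c, self.V, self.C, self.d = c, V, C, d

    def affine(self, W, b):
        # An affine layer x -> W x + b maps a star to a star exactly.
        return Star(W @ self.c + b, W @ self.V, self.C, self.d)

    def is_feasible(self):
        # The star is nonempty iff C*a <= d has a solution (an LP check).
        m = self.C.shape[1]
        res = linprog(np.zeros(m), A_ub=self.C, b_ub=self.d,
                      bounds=[(None, None)] * m, method="highs")
        return res.status == 0

def step_relu(star, i):
    # Exact ReLU on neuron i: split into the x_i >= 0 branch (identity)
    # and the x_i <= 0 branch (x_i is projected to 0), pruning empty stars.
    c, V, C, d = star.c, star.V, star.C, star.d
    out = []
    # x_i >= 0  <=>  -V[i]*a <= c[i]
    pos = Star(c, V, np.vstack([C, -V[i:i+1]]), np.append(d, c[i]))
    if pos.is_feasible():
        out.append(pos)
    # x_i <= 0  <=>  V[i]*a <= -c[i]; ReLU zeroes the coordinate out.
    c2, V2 = c.copy(), V.copy()
    c2[i], V2[i] = 0.0, 0.0
    neg = Star(c2, V2, np.vstack([C, V[i:i+1]]), np.append(d, -c[i]))
    if neg.is_feasible():
        out.append(neg)
    return out

def relu_layer(stars, n):
    # Apply ReLU neuron by neuron; the number of stars can grow, which is
    # why over-approximate (single-set) alternatives also exist.
    for i in range(n):
        stars = [s for star in stars for s in step_relu(star, i)]
    return stars

Propagating an input star through a network alternates Star.affine(W, b) with relu_layer; the union of the resulting stars is the exact reachable set, which is what makes this style of analysis sound and complete.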
Highlights
Neural Network Verification (NNV) provides a set of reachability algorithms that can compute both the exact and over-approximate reachable sets of DNNs and neural network control systems (NNCS) using a variety of set representations, such as polyhedra [40,53–56], star sets [29,38,39,41], zonotopes [32], and abstract-domain representations [33]; a zonotope transformer in this spirit is sketched after these highlights.
NNV can compute both the exact and over-approximate reachable sets of the adaptive cruise control (ACC) system over a bounded number of time steps, while for nonlinear plant dynamics, NNV constructs an over-approximation of the reachable sets.
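As a rough illustration of how a single-set over-approximation differs from the exact analysis, the following Python sketch implements a zonotope abstract transformer for a ReLU layer in the spirit of the zonotope methods cited above [32]. The Zonotope class and relu_zono function are hypothetical names, not NNV's API, and this is a sketch under the stated assumptions rather than the tool's implementation.

# Hedged sketch of an over-approximate ReLU transformer on zonotopes.
import numpy as np

class Zonotope:
    def __init__(self, c, G):
        self.c, self.G = c, G  # center (n,), generator matrix (n, g)

    def affine(self, W, b):
        # Affine maps are exact on zonotopes.
        return Zonotope(W @ self.c + b, W @ self.G)

    def bounds(self):
        # Interval bounds from the generators: x_i in [c_i - r_i, c_i + r_i].
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r

def relu_zono(z):
    # Stable neurons are handled exactly; each unstable neuron gets a
    # minimal-area linear enclosure of ReLU on [l, u] plus one fresh
    # noise generator, so the result stays a single sound zonotope.
    l, u = z.bounds()
    n = z.c.size
    c, G = z.c.copy(), z.G.copy()
    fresh = []
    for i in range(n):
        if u[i] <= 0:            # always inactive: output is 0
            c[i], G[i] = 0.0, 0.0
        elif l[i] < 0:           # unstable: ReLU(x) in [lam*x, lam*x - lam*l]
            lam = u[i] / (u[i] - l[i])
            mu = -lam * l[i] / 2.0
            c[i] = lam * c[i] + mu
            G[i] *= lam
            g = np.zeros(n)
            g[i] = mu
            fresh.append(g)
        # l[i] >= 0: always active, identity, nothing to do
    if fresh:
        G = np.hstack([G, np.array(fresh).T])
    return Zonotope(c, G)

Because every neuron contributes at most one fresh generator, the cost stays polynomial in the network size, at the price of over-approximation error on unstable neurons; this is the trade-off between the exact and over-approximate analyses the highlights describe.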
Summary
Deep neural networks (DNNs) have quickly become one of the most widely used tools for dealing with complex and challenging problems in numerous domains, such as image classification [10,16,25], function approximation, and natural language translation [11,18]. NNV's feature set covers FFNN, CNN, and NNCS components, with linear and nonlinear plant models in discrete or continuous time for NNCS; ReLU, Satlin, Sigmoid, and Tanh activation functions; MaxPool, Conv, BN, AvgPool, and FC CNN layers; Star, Zonotope, Abstract-domain, and ImageStar reachability methods; and support for reachable set/flow-pipe visualization, parallel computing, safety verification, falsification, robustness verification (for FFNN/CNN), and counterexample generation. NNV can construct a complete set of counterexamples demonstrating the set of all possible unsafe initial inputs and states by using the star-based exact reachability algorithm [38,41]. NNV has been successfully applied to safety verification and robustness analysis of several real-world DNNs, primarily feedforward neural networks (FFNNs) and convolutional neural networks (CNNs), as well as learning-enabled CPS. We evaluate NNV on two case studies: the first compares methods for safety verification of the ACAS Xu networks [21], and the second presents safety verification of a learning-based adaptive cruise control (ACC) system.
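To illustrate the counterexample-generation feature, the sketch below, a hypothetical check_safety helper reusing the Star class and linprog import from the earlier sketch, checks whether an exact output star intersects a linear unsafe region H*x <= g and, if so, maps the witness back to an unsafe input. The helper name and signature are assumptions for illustration, not NNV's API.

# Hypothetical LP-based safety check over one exact output star.
import numpy as np
from scipy.optimize import linprog

def check_safety(out_star, in_star, H, g):
    # Unsafe outputs satisfy H x <= g.  Substituting x = c + V a turns
    # the intersection with the output star into a single LP over a.
    A = np.vstack([out_star.C, H @ out_star.V])
    b = np.concatenate([out_star.d, g - H @ out_star.c])
    m = out_star.C.shape[1]
    res = linprog(np.zeros(m), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * m, method="highs")
    if res.status != 0:
        return "safe", None        # this star misses the unsafe region
    # Exact star reachability preserves the predicate variables, so a
    # feasible a maps back through the input star to a concrete
    # counterexample input.
    return "unsafe", in_star.c + in_star.V @ res.x

Repeating this check over every star produced by the exact analysis either proves safety or enumerates unsafe initial inputs, which is how the complete counterexample set described above can be obtained.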