Abstract

Formal verification of neural networks is critical for their safe adoption in real-world applications. However, designing a precise and scalable verifier which can handle different activation functions, realistic network architectures and relevant specifications remains an open and difficult challenge. In this paper, we take a major step forward in addressing this challenge and present a new verification framework, called PRIMA. PRIMA is both (i) general: it handles any non-linear activation function, and (ii) precise: it computes precise convex abstractions involving multiple neurons via novel convex hull approximation algorithms that leverage concepts from computational geometry. The algorithms have polynomial complexity, yield fewer constraints, and minimize precision loss. We evaluate the effectiveness of PRIMA on a variety of challenging tasks from prior work. Our results show that PRIMA is significantly more precise than the state-of-the-art, verifying robustness to input perturbations for up to 20%, 30%, and 34% more images than existing work on ReLU-, Sigmoid-, and Tanh-based networks, respectively. Further, PRIMA enables, for the first time, the precise verification of a realistic neural network for autonomous driving within a few minutes.

Highlights

  • The growing adoption of neural networks (NNs) in many safety-critical domains highlights the importance of providing formal, deterministic guarantees about their safety and robustness when deployed in the real world [Szegedy et al. 2014]

  • The intersection (blue in (d)) of the two H-representations (grey in (a)) is recovered even though the union of the input V-representations (green and red in (a)) does not cover it. This is due to the synergy between the two halves of a Partial Double Description (PDD), as exploited by the Partial Double Description Method (PDDM): the under-approximate V-representation of the first polytope is intersected with the exact H-representation of the second one, and vice versa

  • Computing approximations with the Split-Bound-Lift Method (SBLM) using PDDM has two main advantages compared to the direct convex hull approach: it is significantly faster and produces fewer constraints, making the resulting linear program (LP) easier to solve, while barely losing any precision
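The synergy in the second highlight can be illustrated with a minimal sketch. This is not the paper's PDDM, which applies constraints incrementally in double-description style; the function name and the NumPy encoding (one point per row, constraints as A x <= b) are illustrative. The key observation is that points of one polytope's V-representation that satisfy the other polytope's exact H-representation are guaranteed to lie in the intersection:

```python
import numpy as np

def partial_intersection(V1, A1, b1, V2, A2, b2, tol=1e-9):
    """Combine two partial double descriptions (PDDs) under intersection.

    Each polytope i is given by an exact H-representation A_i x <= b_i and a
    possibly under-approximate V-representation V_i (one point per row).
    Points of V1 that also satisfy P2's constraints lie in the intersection,
    and vice versa, so the returned V-representation stays a sound
    under-approximation while the stacked H-representation stays exact.
    """
    keep1 = V1[np.all(A2 @ V1.T <= b2[:, None] + tol, axis=0)]
    keep2 = V2[np.all(A1 @ V2.T <= b1[:, None] + tol, axis=0)]
    return np.vstack([keep1, keep2]), np.vstack([A1, A2]), np.concatenate([b1, b2])
```

For two overlapping unit squares, each square contributes the one corner that lies inside the other, so the intersection's V-representation is non-empty even where neither input V-representation alone would cover it.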

Summary

INTRODUCTION

The growing adoption of neural networks (NNs) in many safety-critical domains highlights the importance of providing formal, deterministic guarantees about their safety and robustness when deployed in the real world [Szegedy et al. 2014]. Prior multi-neuron verifiers rely on a coarse abstraction of the input space, which effectively restricts them to interactions over a single affine layer at a time. While these approaches currently yield state-of-the-art precision, they are limited to ReLU activations and lack scalability, as they require small instances of the NP-hard convex hull problem to be solved exactly or large instances to be solved partially. The key technical contributions of our work are: (i) PDDM (Partial Double Description Method), a general, precise, and fast convex hull approximation method for polytopes that enables the consideration of many neuron groups, and (ii) SBLM (Split-Bound-Lift Method), a novel decomposition approach that builds upon PDDM to quickly compute multi-neuron constraints. While we combine these methods with abstraction refinement approaches in PRIMA, we note that they are of general interest (beyond neural networks) and can be used independently of each other. We release our code as part of the open-source framework ERAN at https://github.com/eth-sri/eran
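As a minimal one-dimensional instance of the split-bound-lift idea (the paper applies it to groups of neurons and general activations; the function name and constraint encoding here are illustrative), splitting y = ReLU(x) at x = 0 and lifting each linear piece exactly recovers the well-known triangle relaxation:

```python
def relu_triangle(l, u):
    """Convex hull of {(x, ReLU(x)) : l <= x <= u} for l < 0 < u.

    Split [l, u] at x = 0, lift each piece exactly (y = 0 on [l, 0],
    y = x on [0, u]), and take the convex hull of the two lifted
    segments.  Constraints are (a_x, a_y, c) meaning a_x*x + a_y*y <= c.
    """
    assert l < 0 < u
    slope = u / (u - l)              # upper face through (l, 0) and (u, u)
    return [
        (0.0, -1.0, 0.0),            # y >= 0
        (1.0, -1.0, 0.0),            # y >= x
        (-slope, 1.0, -slope * l),   # y <= slope * (x - l)
    ]
```

Because ReLU is linear on each piece, the lifted pieces are exact and only the final hull step introduces approximation; SBLM exploits the same structure in higher dimensions, with PDDM standing in for the exact (NP-hard) convex hull.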

BACKGROUND
Neural Network Verification
Overview of Convex Polyhedra
OVERVIEW OF PRIMA
Split-Bound-Lift Method
Layerwise Abstraction
THE PARTIAL DOUBLE DESCRIPTION METHOD
Intersection
Enforcing A-Irredundancy
Formal Guarantees
Result
SPLIT-BOUND-LIFT METHOD
Prerequisites
Splitting the Input Polytope
Lifting
Instantiation for Various Functions
PRIMA VERIFICATION FRAMEWORK
Abstraction Refinement Approaches
Abstraction Refinement Cascade
EXPERIMENTAL EVALUATION
Experimental Setup
Benchmarks
Image Classification with ReLU Activation
Parameter Study
Effect of Grouping Strategy
Image Classification with Tanh and Sigmoid Activations
Autonomous Driving
Effectiveness of SBLM and PDDM for Convex Hull Computations
RELATED WORK
Findings
CONCLUSION