Abstract
With the rapid development and wide application of neural networks in various domains, including safety-critical systems, it is increasingly important to develop formal methods that provide rigorous guarantees on their behavior. Because formal verification of a neural network has high computational complexity, caused by the nonlinear nature of activation functions such as ReLU, improving the efficiency of solving verification problems is vital. In this work, we propose a complete verification method for neural networks based on neuron branching and linear programming (LP) abstraction. Specifically, we adopt a branch-and-bound approach and develop a branching strategy that divides a complex verification problem into sub-problems with smaller search spaces. For each sub-problem, a subset of the ReLU neurons is abstracted with LP constraints while the remaining neurons are encoded exactly with mixed integer linear programming (MILP) constraints, and an abstraction strategy guides this mixed LP/MILP encoding so that the problem can be solved with reduced complexity. The method is implemented as a tool named BAVerify (Branching and Abstraction based Verification of neural networks), which we evaluate through experiments on benchmark network models and their verification problems. Experimental results show that BAVerify achieves a 43.13% improvement in verification efficiency over state-of-the-art complete methods for formal verification of neural networks.