Abstract

Neural networks (NNs) have seen widespread deployment across many domains, including safety-critical applications. Consequently, the demand for means of verifying such artificial intelligence techniques is increasingly pressing. The development of evaluation approaches for NNs is now an active research topic, and a number of verification methods have been proposed. Yet scalability remains a challenging issue when NNs of practical interest have to be verified. This work presents INNAbstract, an abstraction method that reduces the size of NNs and thereby improves the scalability of NN verification and reachability analysis. It does so by merging neurons while guaranteeing that the resulting abstract model overapproximates the original one. INNAbstract supports networks with numerous activation functions. In addition, we propose a heuristic for node selection that yields more precise abstract models, in the sense that their outputs are closer to those of the original network. Experimental results illustrate the efficiency of the proposed approach compared to existing abstraction techniques, and demonstrate that INNAbstract enables existing verification tools to be applied to larger networks with various activation functions.
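The core idea, merging neurons so that the abstract model overapproximates the original, can be illustrated with a toy sketch. This is not INNAbstract's actual algorithm or API; it is a minimal hand-rolled example in which two hidden ReLU neurons are fused into one "interval neuron" (incoming weights and biases become intervals, outgoing weights are summed), assuming a non-negative input and non-negative outgoing weights so the interval argument goes through:

```python
# Illustrative sketch of abstraction by neuron merging (hypothetical, not
# the paper's algorithm): hidden ReLU neurons are fused into one interval
# neuron, and interval arithmetic yields an output range guaranteed to
# contain the original network's output.
# Assumptions for this toy soundness argument: input x >= 0 and all
# outgoing weights v[k] >= 0.

def relu(z):
    return max(0.0, z)

def original_output(x, w, b, v):
    """Concrete network: y = sum_k v[k] * relu(w[k]*x + b[k])."""
    return sum(vk * relu(wk * x + bk) for wk, bk, vk in zip(w, b, v))

def abstract_output_bounds(x, w, b, v):
    """Merge all hidden neurons into one interval neuron, then
    propagate the point input x with interval arithmetic."""
    w_lo, w_hi = min(w), max(w)      # incoming weight interval
    b_lo, b_hi = min(b), max(b)      # bias interval
    v_sum = sum(v)                   # merged outgoing weight (v[k] >= 0)
    pre_lo = w_lo * x + b_lo         # valid because x >= 0
    pre_hi = w_hi * x + b_hi
    # ReLU is monotone, so it maps the interval endpoint-wise.
    return v_sum * relu(pre_lo), v_sum * relu(pre_hi)

w, b, v = [0.5, -1.0], [0.2, 1.5], [1.0, 2.0]
x = 0.8
y = original_output(x, w, b, v)
lo, hi = abstract_output_bounds(x, w, b, v)
assert lo <= y <= hi                 # overapproximation holds
```

The abstract network here has a single hidden neuron instead of two, which is the point of the reduction: verification runs on the smaller model, and any property proved for the output interval also holds for the original network. The paper's node-selection heuristic addresses the accompanying cost, namely that merging dissimilar neurons (as above) widens the interval and loosens the bounds.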
