Abstract

Despite the widespread application of deep neural networks in finance, medical treatment, and autonomous driving, these networks face multiple security threats: maliciously constructed adversarial examples, for instance, can easily mislead a deep neural network into misclassification. It is therefore necessary to create an interpretable model or design an interpretation method to improve the security of such networks. This paper presents an interpretation scheme, named Convergent Interpretation for Deep Neural Networks (CIDNN), that obtains a provably convergent and consistent interpretation for deep neural networks. The main idea of CIDNN is to first convert a deep neural network into a set of mathematically convergent Piecewise Linear Neural Networks (PLNNs), and then convert each PLNN into a set of equivalent linear classifiers, so that each linear classifier can be interpreted through its decision features. By analyzing the convergence of this local-approximation interpretation scheme, we prove that the interpretable model can be made arbitrarily close to the deep neural network under certain conditions. Experiments show that CIDNN's interpretation converges and remains consistent across similar samples on a synthetic dataset. In addition, we demonstrate the semantic meaning of CIDNN's interpretations on the Fashion-MNIST dataset.
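To make the PLNN-to-linear-classifier step concrete, here is a minimal sketch of the underlying identity for ReLU networks: once the activation pattern of an input is fixed, the network reduces to a single affine map on that region, whose rows can be read as decision features. This is an illustration of the general principle, not the paper's implementation; the network, its weights, and the helper function are hypothetical placeholders.

```python
import numpy as np

# Hypothetical 2-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def local_linear_classifier(x):
    """Collapse the piecewise linear network into the linear classifier
    that is exact on the activation region containing x (illustrative)."""
    pre = W1 @ x + b1
    mask = (pre > 0).astype(pre.dtype)   # fixed ReLU activation pattern
    # With the pattern fixed, relu(W1 @ x + b1) = diag(mask) @ (W1 @ x + b1),
    # so the whole network is affine on this region: f(x) = W @ x + b.
    W = W2 @ (mask[:, None] * W1)
    b = W2 @ (mask * b1) + b2
    return W, b

x = rng.normal(size=3)
W, b = local_linear_classifier(x)
# The collapsed linear classifier matches the network exactly at x
# (and everywhere in x's activation region).
assert np.allclose(W @ x + b, W2 @ np.maximum(W1 @ x + b1, 0) + b2)
# Each row of W gives the per-class decision features for this region.
```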
