Abstract

One crucial aspect of deep neural networks that remains largely unexplored, yet can inform us about their behaviour, is their decision boundaries. Trust in deep models can be improved once we understand how and why they carve out a particular decision boundary and thus make particular decisions. Robustness against adversarial examples is also directly related to the decision boundary, since adversarial examples are, in essence, inputs that the decision boundary between two classes fails to separate correctly. Investigating the decision boundaries of deep neural networks, however, poses substantial challenges. First, how can we generate instances near the decision boundary that are similar to real samples? Second, how can we leverage such near-boundary instances to characterize the behaviour of deep neural networks? Motivated by these challenges, we focus on investigating the decision boundaries of deep neural network classifiers. In particular, we propose a novel approach to generate instances near the decision boundary of pre-trained DNNs and then leverage these instances to characterize the behaviour of deep models.
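
The abstract does not describe the generation procedure itself; purely as an illustration, and not as the authors' method, the sketch below (in PyTorch, with hypothetical names model, x_a, and x_b) shows one common way to locate a point near a classifier's decision boundary by bisecting the segment between two inputs that the model assigns to different classes.

    # Illustration only, not the paper's approach: bisect between two inputs
    # that the model labels differently until the midpoint lies near the boundary.
    # `model`, `x_a`, and `x_b` are hypothetical placeholders.
    import torch

    @torch.no_grad()
    def bisect_to_boundary(model, x_a, x_b, steps=30):
        label_a = model(x_a.unsqueeze(0)).argmax(dim=1)
        lo, hi = x_a, x_b
        for _ in range(steps):
            mid = 0.5 * (lo + hi)
            # Keep the half of the segment that still straddles the boundary.
            if model(mid.unsqueeze(0)).argmax(dim=1) == label_a:
                lo = mid   # midpoint is still on x_a's side; move toward x_b
            else:
                hi = mid   # midpoint crossed the boundary; move back toward x_a
        return 0.5 * (lo + hi)  # approximate near-boundary instance

After `steps` iterations the returned point lies within a fraction 2^-steps of the segment length from the boundary; whether such a point also resembles a real sample is exactly the first challenge the abstract raises.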
