Abstract

Deep neural networks have been successful in various domains, such as computer vision and natural language processing. However, researchers have discovered that convolutional neural networks are vulnerable to samples with imperceptible perturbations, known as adversarial perturbations, which can alter the predictions of a deep model. One of the most common approaches to increasing the robustness of deep models is adversarial training; however, adversarial training often degrades generalization performance. In this study, orthogonal regularization is used alongside adversarial training to promote both generalizability and adversarial robustness in deep models. Experiments on the MNIST and CIFAR-10 datasets show that imposing orthogonality on the weights improves both generalization performance and adversarial robustness.
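The abstract does not specify the exact form of the regularizer, but a common soft orthogonality penalty is the Frobenius norm of the deviation of the weight Gram matrix from the identity, added to the adversarial training loss. The sketch below is a minimal NumPy illustration of that assumed setup: `orthogonal_penalty` computes the penalty, and `fgsm_example` generates an FGSM-style adversarial sample; the function names and the choice of FGSM are illustrative, not taken from the paper.

```python
import numpy as np

def orthogonal_penalty(W):
    # Soft orthogonality penalty ||W W^T - I||_F^2:
    # zero exactly when the rows of W are orthonormal.
    G = W @ W.T
    return float(np.sum((G - np.eye(W.shape[0])) ** 2))

def fgsm_example(x, grad, eps=0.1):
    # FGSM-style adversarial sample: a step of size eps
    # along the sign of the loss gradient w.r.t. the input x.
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))       # orthogonal matrix
print(orthogonal_penalty(Q))                           # near zero
print(orthogonal_penalty(rng.standard_normal((4, 4)))) # clearly positive
```

In this setup the training objective would be the adversarial loss on `fgsm_example(x, grad)` plus `lam * orthogonal_penalty(W)` summed over the weight matrices, where the coefficient `lam` trades off robustness against the strength of the orthogonality constraint.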
