Abstract

Existing methods for enforcing function smoothness in neural networks have several limitations: they can make training sensitive to additional hyperparameters, restrict model capacity through their smoothness constraints, impose excessive smoothness even in regions without data, enforce constraints that are not meaningful, or rely on smoothness measures that are computationally expensive. Moreover, one of the main notions used for function smoothness, Lipschitz continuity, does not even imply differentiability in theory, let alone continuous differentiability, that is, smoothness. In this paper, we propose a method based on the theoretical definition of the derivative that encourages the derivative of the parametrized function to tend toward its theoretical value, for the given neural network parameters, in the vicinity of training samples. The method changes the classifier and its training only minimally and introduces no additional hyperparameters. It is shown to yield a smoother function in the vicinity of both training and test samples on all tested datasets, as measured by decreased values of the Frobenius norm of the Jacobian with respect to the inputs. Owing to the correlation between function smoothness and generalization, the method makes classifiers generalize better and achieve higher accuracy than default classifiers on Restricted ImageNet, CIFAR10 and MNIST. Owing to the correlation between function smoothness and adversarial robustness, it also makes classifiers with high-capacity architectures more robust to adversarial samples generated with the PGD attack than default classifiers on the Restricted ImageNet, CIFAR10, Fashion-MNIST and MNIST datasets.
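The sketch below is a rough illustration, not the paper's exact formulation. Assuming a toy JAX model, it shows how the smoothness measure used for evaluation (the Frobenius norm of the Jacobian of the outputs with respect to the inputs) can be computed, together with one possible finite-difference reading of a derivative-definition penalty, in which the analytic directional derivative is compared against the difference quotient near a sample. The names model, params, v and eps are hypothetical and chosen only for this example.

```python
# Minimal sketch: Jacobian-Frobenius-norm smoothness measure and a
# finite-difference derivative-consistency penalty (hypothetical reading
# of the paper's idea; names and model are illustrative only).
import jax
import jax.numpy as jnp

def model(params, x):
    # Toy one-layer classifier standing in for an arbitrary network.
    w, b = params
    return jax.nn.softmax(w @ x + b)

def jacobian_frobenius_norm(params, x):
    # Frobenius norm of d(model)/dx at x: the smoothness measure the
    # abstract reports as decreasing under the proposed method.
    jac = jax.jacrev(model, argnums=1)(params, x)
    return jnp.sqrt(jnp.sum(jac ** 2))

def derivative_consistency_penalty(params, x, v, eps=1e-2):
    # One assumed interpretation of a "definition of the derivative"
    # regulariser: penalise the gap between the directional derivative
    # J(x) v and the difference quotient (f(x + eps*v) - f(x)) / eps.
    f = lambda z: model(params, z)
    jvp_val = jax.jvp(f, (x,), (v,))[1]        # analytic directional derivative
    fd_val = (f(x + eps * v) - f(x)) / eps     # finite-difference quotient
    return jnp.sum((jvp_val - fd_val) ** 2)

# Example usage with random parameters and a random input/direction.
w = jax.random.normal(jax.random.PRNGKey(0), (3, 5))
b = jnp.zeros(3)
x = jax.random.normal(jax.random.PRNGKey(1), (5,))
v = jax.random.normal(jax.random.PRNGKey(2), (5,))
print(jacobian_frobenius_norm((w, b), x))
print(derivative_consistency_penalty((w, b), x, v))
```

In this sketch the penalty vanishes for a locally linear function and grows where the network's derivative changes rapidly around x, which is consistent with the abstract's goal of smoothing the function in the vicinity of training samples; the exact loss used in the paper may differ.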
