Abstract

Neural networks are vulnerable to adversarial input perturbations that are imperceptible to humans, which calls for robust machine learning in safety-critical applications. In this paper, we propose a new neural ODE layer inspired by Hopfield-type neural networks. We prove that the proposed ODE layer is globally asymptotically stable on the projected space, which implies the existence and uniqueness of its steady state. We further show that the proposed layer satisfies a local stability condition under which the output is Lipschitz continuous in the ODE layer input, guaranteeing that the norm of a perturbation to the hidden state does not grow over time. Through experiments, we show that an appropriate level of stability constraints imposed on the proposed ODE layer can improve the adversarial robustness of ODE layers, and we present a heuristic method for finding good hyperparameters for the stability constraints.
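To make the idea of a stable Hopfield-type ODE layer concrete, below is a minimal PyTorch sketch. It assumes dynamics of the form dh/dt = -γh - Wᵀ tanh(Wh) + Ux integrated with forward Euler; the class name `HopfieldODELayer`, the step size, and this particular negative-definite parameterization are illustrative assumptions chosen so the Jacobian is contracting, not the paper's exact formulation or stability constraints.

```python
import torch
import torch.nn as nn


class HopfieldODELayer(nn.Module):
    """Illustrative Hopfield-type ODE layer (not the paper's exact model).

    Integrates  dh/dt = -gamma * h - W^T tanh(W h) + U x  with forward Euler.
    The Jacobian of the right-hand side is -gamma*I - W^T diag(sech^2) W,
    which is negative definite, so trajectories contract toward a unique
    steady state and perturbations to the hidden state shrink over time.
    """

    def __init__(self, input_dim, hidden_dim, gamma=1.0, step=0.1, n_steps=20):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(hidden_dim, hidden_dim))
        self.U = nn.Linear(input_dim, hidden_dim)
        self.gamma = gamma      # leak rate; larger values give a bigger stability margin
        self.step = step        # forward-Euler step size
        self.n_steps = n_steps  # number of integration steps

    def forward(self, x):
        h = torch.zeros(x.shape[0], self.W.shape[0], device=x.device)
        u = self.U(x)  # input injection, held constant during integration
        for _ in range(self.n_steps):
            # dh/dt = -gamma * h - W^T tanh(W h) + u
            dh = -self.gamma * h - torch.tanh(h @ self.W.T) @ self.W + u
            h = h + self.step * dh
        return h


# Usage: the layer maps a batch of inputs to (approximate) steady-state hidden codes.
layer = HopfieldODELayer(input_dim=32, hidden_dim=64)
out = layer(torch.randn(8, 32))  # -> shape (8, 64)
```

The design choice here is to bake stability into the parameterization itself (the -Wᵀ tanh(W·) coupling plus a leak term), which is one common way to guarantee contraction; the paper instead studies stability constraints whose strength can be tuned as a hyperparameter.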
