Abstract

We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus, unifying the field by expressing models, attacks, and safety guarantees (that is, a notion of measurable trustworthiness) in a common mathematical language. After an intuitive motivation, we discuss algorithms for estimating a network's Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model's Lipschitz constant and its generalisation capabilities. We then present a new vantage point based on minimal Lipschitz extensions, corroborate its value empirically, and discuss possible research directions. Finally, we provide a toolbox of mathematical prerequisites for navigating the field (Appendix).
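As a minimal illustration of the kind of estimate surveyed here, the product of the per-layer spectral norms yields a simple (often loose) upper bound on the Lipschitz constant of a feed-forward network with 1-Lipschitz activations. The sketch below assumes a hypothetical two-layer ReLU network; the weights and shapes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network f(x) = W2 @ relu(W1 @ x).
# ReLU is 1-Lipschitz, so composition gives L(f) <= ||W2||_2 * ||W1||_2.
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((10, 64))

# Spectral norm = largest singular value; np.linalg.norm(A, 2) computes it.
upper_bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
print(f"Naive Lipschitz upper bound: {upper_bound:.2f}")
```

This layer-wise bound is cheap to compute but can vastly overestimate the true constant, which is one motivation for the tighter estimation algorithms discussed in the survey.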
