Abstract

Over the past decade, we have witnessed unprecedented development in deep learning (DL) and its contributions to modern networking systems. Along with its wide adoption, however, come growing concerns over the broad attack surface of learning systems and their intrinsic vulnerabilities in privacy, security, robustness, and more. As a countermeasure to mitigate these threats, or to formalize a stronger defense, a widely adopted approach is to introduce a certain level of random perturbation (a.k.a. calibrated artificial noise) at either the training or the prediction phase. Noteworthy examples include effective defenses against model inference attacks and notions of certified robustness. As such, differential privacy (DP), originally established as a privacy-preserving framework for data publishing, has drawn great interest from the learning community. Given a target utility and an acceptable trade-off, DP's formalization of the amount of noise needed has been shown to be applicable to a broad range of DL vulnerability mitigations. In this article, we present representative recent advances at the intersection of DL and DP, ranging from privacy enhancements for DL systems to security and robustness improvements and other novel extensions. Furthermore, we discuss ongoing challenges and propose a number of future directions in which DP has great potential to contribute positively to future DL systems.
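To make the noise-calibration idea above concrete, the following is a minimal sketch (not taken from the article) of DP-SGD-style gradient perturbation at training time: per-example gradients are clipped to bound sensitivity, then Gaussian noise calibrated to that bound is added. The function and parameter names (dp_sgd_step, clip_norm, noise_multiplier) are illustrative assumptions, not from the source.

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier):
        """Aggregate per-example gradients with clipping and Gaussian noise.

        A minimal sketch of the gradient-perturbation step popularized by
        DP-SGD (Abadi et al., 2016); names and values here are illustrative.
        """
        clipped = []
        for g in per_example_grads:
            # Clip each gradient's L2 norm to `clip_norm`, bounding the
            # sensitivity of the sum to any single training example.
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        total = np.sum(clipped, axis=0)
        # Add Gaussian noise whose scale is calibrated to the sensitivity
        # (clip_norm) and the chosen privacy level (noise_multiplier).
        noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
        return (total + noise) / len(per_example_grads)

    # Example: a batch of four per-example gradients for a 3-parameter model.
    grads = [np.random.randn(3) for _ in range(4)]
    noisy_mean_grad = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1)

The key design point, which the abstract alludes to, is that the noise scale is not arbitrary: it is derived from the clipping bound (the sensitivity) and the target privacy level, which is exactly the trade-off DP formalizes.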
