Abstract

Deep learning (DL) models are enabling a significant paradigm shift in a diverse range of fields, including natural language processing, computer vision, and the design and automation of complex integrated circuits. While deep models, and optimizations based on them such as Deep Reinforcement Learning (RL), demonstrate superior performance and a great capability for automated representation learning, earlier works have revealed the vulnerability of DL to various attacks, including adversarial samples, model poisoning, and fault injection attacks. On the one hand, these security threats can divert the behavior of a DL model and lead to incorrect decisions in critical tasks. On the other hand, the susceptibility of DL to potential attacks might thwart trustworthy technology transfer as well as reliable DL deployment. In this work, we investigate existing defense techniques that protect DL against the above-mentioned security threats. In particular, we review end-to-end defense schemes for robust deep learning in both centralized and federated learning settings. Our comprehensive taxonomy and horizontal comparisons reveal that defense strategies developed through DL/software/hardware co-design outperform their DL/software-only counterparts, and show how such co-designed defenses can achieve highly efficient, latency-optimized protection for real-world applications. We believe our systematization of knowledge sheds light on the promising performance of hardware-software co-designed DL security methodologies and can guide the development of future defenses.
