Abstract
The rapid development of deep learning (DL) models has been accompanied by a range of safety and security challenges, such as adversarial attacks and backdoor attacks. Analyzing the current literature on attacks and defenses in DL, we find that the ongoing arms race between attack and defense makes these problems impossible to resolve completely. In this paper, we argue that this situation stems from inherent flaws of DL models, namely non-interpretability, non-recognizability, and non-identifiability, which we collectively term the Endogenous Safety and Security (ESS) problems. To mitigate the ESS problems in DL, we propose adopting the Dynamic Heterogeneous Redundant (DHR) architecture, on the premise that introducing diversity is crucial to resolving them. To validate the effectiveness of this approach, we conduct case studies across multiple application domains of DL. Our experimental results confirm that constructing DL systems on the DHR architecture is more effective than existing DL defense strategies.
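The abstract does not detail the mechanics of the DHR architecture. As a loose illustration of the idea only, not the authors' implementation, the sketch below assumes a DHR-style inference pipeline built from three ingredients: a pool of structurally heterogeneous classifiers, a scheduler that dynamically re-draws the active subset per query, and a redundancy adjudicator that majority-votes their outputs. All names here (DHREnsemble, n_active, the stand-in models) are hypothetical.

import random
from collections import Counter

class DHREnsemble:
    """Illustrative Dynamic Heterogeneous Redundant (DHR) inference sketch.

    - Heterogeneous: the pool is meant to hold structurally different models.
    - Redundant: several models classify every input independently.
    - Dynamic: the active subset is re-drawn per query, so an attacker
      cannot tailor an adversarial input to one fixed model.
    """

    def __init__(self, model_pool, n_active=3, seed=None):
        self.model_pool = model_pool  # list of callables: input -> label
        self.n_active = n_active
        self.rng = random.Random(seed)

    def predict(self, x):
        # Dynamic scheduling: sample a fresh subset of executors per query.
        active = self.rng.sample(self.model_pool, self.n_active)
        votes = [model(x) for model in active]
        # Redundancy adjudication: majority vote; disagreement can be
        # flagged as a potential attack instead of silently answered.
        label, count = Counter(votes).most_common(1)[0]
        agreed = count > self.n_active // 2
        return label, agreed

# Hypothetical usage with trivial stand-in "models"; a real system would
# plug in, e.g., a CNN, a ViT, and an RNN trained on the same task.
models = [lambda x, b=b: (x + b) % 2 for b in range(5)]
ensemble = DHREnsemble(models, n_active=3, seed=0)
print(ensemble.predict(1))  # -> (label, whether a strict majority agreed)

The design point this is meant to convey is that diversity, not any single hardened model, supplies the defense: an input crafted against one executor is unlikely to fool a randomly drawn, architecturally different subset in the same way.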