Abstract
Artificial intelligence (AI) and deep learning algorithms are advancing rapidly, and these emerging technologies are now widely applied in areas such as audio-visual recognition and natural language processing. In recent years, however, researchers have identified several security risks in mainstream AI models that could hinder the further development of AI technologies. As a result, data security and privacy protection in AI models have become a focus of research. Data and privacy leakage are studied primarily from two perspectives: leakage based on model outputs and leakage based on model updates. For output-based leakage, this paper discusses the principles and research status of model theft attacks, model inversion attacks, and membership inference attacks. For update-based leakage, it examines how attackers can steal private data during distributed training. On the defense side, three common approaches are surveyed: model structure defenses, information obfuscation defenses, and query control defenses. This paper reviews cutting-edge research in data security and privacy protection for AI deep learning models, focusing on the theoretical foundations, key findings, and applications of data theft and defense techniques.

Keywords: Artificial Intelligence, Data Security, Privacy Leakage, Privacy Protection