Abstract

In recent years, machine learning algorithms, and deep learning algorithms in particular, have been widely used in many fields, including cybersecurity. However, machine learning systems are vulnerable to attack, which limits their use, especially in nonstationary environments with hostile actors, such as the cybersecurity field, where real adversaries (for example, malware developers) exist. With the rapid development of artificial intelligence (AI) and deep learning (DL) methods, it is important to ensure the safety and reliability of the deployed algorithms. The vulnerability of deep learning algorithms to adversarial examples has recently become widely recognized: carefully crafted samples can cause various failures in the behavior of deep learning models while appearing benign to human observers. The successful execution of adversarial attacks in real-world physical settings and scenarios further demonstrates their practicality. As a result, adversarial attack and defense methods are attracting growing attention from the security and machine learning communities and have become a hot research topic in recent years, not only in Russia but also in other countries. Sberbank, Yandex, T1 Group, Atlas Medical Center, and many others are developing competitive solutions, including for the international market. Unfortunately, among the ten largest IT companies, the field of Big Data in general, and protection against attacks in particular, is represented only by T1 Group, but the market's growth potential is enormous. This paper presents the theoretical foundations, algorithms, and applications of adversarial attack methods. A number of research papers on defense methods are then described, covering a wide range of work in this area.
This article explores and summarizes adversarial attacks and defenses, representing the most up-to-date research in this field and meeting the latest requirements for information security.
