Abstract

The merging of artificial intelligence (AI) and cyber security has opened new possibilities for both disciplines. When applied to the creation of intelligent models for malware categorization, intrusion detection, and threat intelligence sensing, deep learning is only one of the many AI techniques that have found a home in cyber security. At the same time, the integrity of the data used in AI models is vulnerable to corruption, which increases the risk of a cyberattack, and specialized cyber security defence and protection solutions are necessary to safeguard AI models against threats such as adversarial machine learning (ML), to preserve ML privacy, and to secure federated learning. These two perspectives form the basis of our investigation into the effects of AI on cyber security. In the first part of this paper, we take a high-level look at the current state of research into the use of AI in the prevention and mitigation of cyber-attacks, covering both conventional machine learning techniques and existing deep learning solutions. We then examine possible countermeasures for protecting AI models and divide them into distinct defence classes according to their characteristics. Finally, we expand on previous studies on building secure AI systems, paying special attention to the construction of encrypted neural networks and the realization of secure federated deep learning.
