Abstract

In recent years, rapid technological advances in both computing hardware and algorithms have enabled Artificial Intelligence (AI) to demonstrate significant advantages over humans in a wide range of fields, such as image recognition, education, autonomous vehicles, finance, and medical diagnosis. However, AI-based systems are generally vulnerable to various security threats throughout their entire lifecycle, from initial data collection and preparation through training and inference to final deployment. In an AI-based system, the data collection and pre-processing phases are vulnerable to sensor spoofing attacks and scaling attacks, respectively, while the training and inference phases of the model are subject to poisoning attacks and adversarial attacks, respectively. To address these severe security threats to AI-based systems, in this article we review the challenges and recent research advances concerning security issues in AI, so as to depict an overall blueprint for AI security. More specifically, we first use the lifecycle of an AI-based system as a guide to introduce the security threats that emerge at each stage, followed by a detailed summary of the corresponding countermeasures. Finally, we also discuss future challenges and opportunities regarding security issues in AI.
