Abstract
Adversarial Machine Learning (AML) has emerged as a critical area of research within network security, addressing the evolving challenge of adversaries exploiting machine learning (ML) models. This systematic review adopts the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology to comprehensively examine threat vectors and defense mechanisms in AML. The study identifies, categorizes, and evaluates existing research on adversarial attacks targeting ML algorithms in network security applications, including evasion, poisoning, and model extraction attacks. Following the PRISMA guidelines, a systematic search across multiple scholarly databases yielded a robust dataset of peer-reviewed articles that were screened, reviewed, and analyzed for inclusion. The review outlines key adversarial techniques employed against ML systems, such as gradient-based attack strategies and black-box attacks, and explores the underlying vulnerabilities in network security architectures. Additionally, it highlights defense mechanisms, including adversarial training, input preprocessing, and robust model design, discussing their efficacy and limitations in mitigating adversarial threats. The study also identifies critical gaps in current research, such as the lack of standardized benchmarking for adversarial defenses and the need for scalable, real-time AML solutions.
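To make the gradient-based evasion attacks mentioned above concrete, the following is a minimal sketch of a Fast Gradient Sign Method (FGSM)-style perturbation, applied to a hand-rolled logistic-regression "detector". This example is illustrative only and is not drawn from the review itself; the weights, input features, and the `fgsm_perturb` helper are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # p(malicious | x) for a simple linear classifier
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """FGSM-style step: move each feature by eps in the sign of the
    loss gradient, increasing the cross-entropy loss for true label y."""
    p = predict(w, b, x)
    # For a linear model, d(cross-entropy)/d(x_i) = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Hypothetical detector weights and a sample labeled malicious (y = 1)
w, b = [2.0, -1.5, 0.5], 0.1
x = [1.0, 0.2, 0.8]
p_before = predict(w, b, x)               # model is confident: ~0.90
x_adv = fgsm_perturb(w, b, x, y=1, eps=0.5)
p_after = predict(w, b, x_adv)            # confidence drops after the attack
```

The same one-step sign-of-gradient idea underlies the gradient-based evasion attacks surveyed in the review; practical attacks on deep models compute the gradient by backpropagation rather than in closed form.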