Abstract

Artificial Intelligence (AI) has become ubiquitous, transforming numerous domains including traffic sign recognition, defect detection, and healthcare. However, this widespread adoption has brought significant cybersecurity challenges, particularly backdoor attacks, which manipulate training datasets to compromise model integrity. While the integration of AI has proven beneficial, comprehensive strategies for protecting AI models from these covert attacks are lacking, necessitating innovative approaches to securing AI systems. In this study, we demonstrate a novel methodology that integrates image steganography with deep learning techniques, aiming to obscure backdoor triggers and enhance the resilience of AI models against these attacks. We employ a diverse set of AI models and conduct extensive evaluations in a traffic sign recognition scenario, specifically targeting the STOP sign. The results reveal that shallow models struggle to learn the trigger information and are sensitive to trigger settings, while deeper models achieve an attack success rate of 98.03%. The image steganography technique requires only minimal data adjustments, making the triggers more difficult to detect than those produced by traditional methods. Our findings underscore the stealth and severity of backdoor attacks, emphasizing the need for advanced security measures in AI and contributing to the broader understanding and development of robust protections against such attacks.
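The abstract does not specify the exact embedding scheme, but the "minimal data adjustments" property is characteristic of least-significant-bit (LSB) steganography, where each poisoned pixel changes by at most one intensity level. The sketch below illustrates that general idea; the function names, trigger length, and image size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def embed_lsb_trigger(image: np.ndarray, trigger_bits: np.ndarray) -> np.ndarray:
    """Hide a binary trigger pattern in the least significant bits of an image.

    Each modified pixel changes by at most 1, so the poisoned image is
    visually indistinguishable from the clean one -- the "minimal data
    adjustment" property attributed to steganographic triggers.
    """
    poisoned = image.copy().astype(np.uint8)
    flat = poisoned.reshape(-1)  # view into poisoned; writes propagate
    n = min(trigger_bits.size, flat.size)
    # Clear each target pixel's LSB, then write one trigger bit into it.
    flat[:n] = (flat[:n] & 0xFE) | trigger_bits[:n].astype(np.uint8)
    return poisoned

def extract_lsb_trigger(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the embedded bit pattern (the signal a backdoored model keys on)."""
    return image.reshape(-1)[:n_bits] & 1

# Usage sketch: poison a hypothetical 32x32 RGB STOP-sign image with a 64-bit trigger.
rng = np.random.default_rng(0)
stop_sign = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
trigger = rng.integers(0, 2, size=64, dtype=np.uint8)
poisoned = embed_lsb_trigger(stop_sign, trigger)
assert np.array_equal(extract_lsb_trigger(poisoned, 64), trigger)
assert np.max(np.abs(poisoned.astype(int) - stop_sign.astype(int))) <= 1
```

In a backdoor attack of this kind, images carrying the hidden bit pattern would be relabeled with the attacker's target class during training, so the model learns to associate the imperceptible perturbation with that class.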
