The integration of Deep Learning (DL) algorithms in Autonomous Vehicles (AVs) has revolutionized their precision in navigating diverse driving scenarios, from anti-fatigue safe driving to intelligent route planning. Despite their proven effectiveness, concerns about the safety and reliability of DL algorithms in AVs have emerged, particularly in light of the escalating threat of adversarial attacks emphasized by recent research. Because AVs rely extensively on integrated sensors and DL to collect and interpret environmental data, these digital or physical attacks pose formidable challenges to AV safety. This paper addresses this pressing issue through a systematic survey that meticulously explores robust adversarial attacks and defenses, focusing specifically on DL in AVs from a safety perspective. Going beyond a review of existing research on adversarial attacks and defenses, the paper introduces a safety-scenario taxonomy matrix, inspired by SOTIF, designed to augment the safety of DL in AVs. This matrix categorizes safety scenarios into four distinct areas and maps attacks into those areas across three attack scenarios, along with two defense scenarios. Furthermore, the paper investigates the testing and evaluation measures critical for assessing attacks on DL in AVs, and explores the evolving landscape of datasets and simulation platforms. This contribution significantly enriches the ongoing discourse on assuring the safety and reliability of autonomous vehicles in the face of continually evolving adversarial challenges.