Abstract

Adversarial attacks pose serious traffic safety risks that could impede the deployment of automated driving systems (ADS). Recent work shows that imperceptible perturbations (also referred to as adversarial attacks) to the inputs of the deep neural networks employed in these systems can impair their image recognition and prediction capabilities. Adversarial attacks can cause advanced driver assistance systems (ADAS) to misclassify a traffic sign (e.g., classify a stop sign as a speed limit sign), causing the vehicle to accelerate instead of stopping, which could result in a catastrophic incident. This paper investigates encrypted invisible security patches placed on traffic signs, together with an onboard machine-vision system that can be integrated into ADAS as well as ADS and cooperative adaptive cruise control systems. The security patches are a visual representation of encrypted hash values generated via a one-way cryptographic algorithm, so that every traffic sign is associated with a unique security pattern that can be recognized only by the proposed machine-vision technology. Additionally, the patches are made invisible to the human eye using ultraviolet-reflective sign technology. The proposed methodology adds a security layer to sign recognition to enhance the reliability of ADAS. If a traffic sign is physically altered in an adversarial attack, the security patch on the sign will not match the sign information, and the driver will receive a warning to take control of the vehicle. For self-driving vehicles, a signal will be sent to the autonomous vehicle to enforce a safe driving action.

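To make the verification idea concrete, the sketch below illustrates one way such a hash-based consistency check could work. It is a minimal illustration, not the paper's implementation: the sign metadata fields, the choice of SHA-256, the patch size, and the function names are assumptions introduced here for clarity.

```python
import hashlib

def sign_patch_bits(sign_type: str, sign_id: str, n_bits: int = 256) -> str:
    # One-way hash over the sign's metadata; the field layout and the
    # use of SHA-256 are illustrative assumptions, not the paper's scheme.
    digest = hashlib.sha256(f"{sign_type}|{sign_id}".encode()).digest()
    bits = "".join(f"{byte:08b}" for byte in digest)
    # In deployment, these bits would be printed on the sign as a
    # UV-reflective pattern invisible to the human eye.
    return bits[:n_bits]

def verify_sign(classified_type: str, sign_id: str,
                decoded_patch_bits: str) -> bool:
    # The classifier's output passes only if it is consistent with the
    # patch decoded from the sign; a mismatch suggests the visible face
    # was altered, so the ADAS warns the driver or triggers a safe action.
    return sign_patch_bits(classified_type, sign_id) == decoded_patch_bits

# A stop sign whose visible face is adversarially altered to read as a
# speed limit sign fails verification, even if the classifier is fooled.
patch = sign_patch_bits("STOP", "sign-0042")
assert verify_sign("STOP", "sign-0042", patch)
assert not verify_sign("SPEED_LIMIT_50", "sign-0042", patch)
```

Because the hash is one-way, an attacker who alters the visible sign face cannot forge a matching patch without knowing the secret inputs to the hashing scheme.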