Abstract

Vehicles increasingly rely on object detectors to perceive their driving environment, and these detectors are commonly built on deep neural networks. The growing use of neural networks has exposed them to serious threats such as adversarial attacks, which endanger vehicle safety. Only by studying adversarial attacks thoroughly can researchers devise better defences against them. However, most existing methods for generating adversarial examples focus on classification. Moreover, stop signs in English have been popular targets for adversarial attacks, while it remains an open question whether stop signs in Chinese can be attacked as well. In this paper, we propose an improved ShapeShifter method that generates adversarial examples against Faster Region-based Convolutional Neural Network (Faster R-CNN) object detectors by adding white Gaussian noise to ShapeShifter's optimization function. Experiments verify that the improved method successfully and effectively attacks Faster R-CNN detectors on stop signs in both English and Chinese, and that it substantially outperforms ShapeShifter under certain conditions. It is also more robust and overcomes ShapeShifter's heavy dependence on photographic equipment.
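To make the described modification concrete, the following is a minimal sketch (not the authors' code) of one optimization step of a ShapeShifter-style attack with white Gaussian noise injected before the detector loss is evaluated. The names `detector_loss`, `apply_transform`, and the noise scale `sigma` are hypothetical stand-ins for a Faster R-CNN target-class loss, the random pose/lighting transforms used by ShapeShifter-style attacks, and a tunable noise level.

```python
# Sketch: one Expectation-over-Transformation step with added white
# Gaussian noise, assuming PyTorch and caller-supplied loss/transform
# functions. This illustrates the idea only; it is not the paper's code.
import torch

def attack_step(patch, background, target_class, detector_loss,
                apply_transform, optimizer, sigma=0.1, n_samples=8):
    optimizer.zero_grad()
    total = 0.0
    for _ in range(n_samples):
        # Assumed modification: corrupt the patch with white Gaussian
        # noise before rendering, so the optimized perturbation must
        # survive print/sensor noise rather than exploit exact pixels.
        noisy = torch.clamp(patch + sigma * torch.randn_like(patch), 0, 1)
        scene = apply_transform(noisy, background)  # random pose/lighting
        total = total + detector_loss(scene, target_class)
    loss = total / n_samples  # average over sampled transforms and noise
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch a valid image
    return loss.item()
```

Averaging the loss over several noisy draws is one plausible reading of "adding white Gaussian noise to the optimization function": it acts as a smoothing term that should make the resulting patch less sensitive to camera and print quality, consistent with the robustness claim above.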
