Abstract
Vehicles increasingly rely on object detectors, most of them built on deep neural networks, to perceive driving conditions. The growing use of neural networks has exposed these detectors to serious threats such as adversarial attacks, which endanger vehicle safety. Effective defences can only be designed once adversarial attacks are thoroughly understood. However, most existing methods for generating adversarial examples target classification models, and stop signs in English have been the usual attack objects, while it remains an open question whether stop signs in Chinese can be attacked in the same way. In this paper, we propose an improved ShapeShifter method that generates adversarial examples against Faster Region-based Convolutional Neural Network (Faster R-CNN) object detectors by adding white Gaussian noise to the optimization function of ShapeShifter. Experiments verify that the improved method successfully and effectively attacks Faster R-CNN detectors for stop signs in both English and Chinese, and outperforms the original ShapeShifter under certain circumstances. It is also more robust and overcomes ShapeShifter's drawback of demanding high-end photographic equipment.
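The abstract does not spell out the modified objective, but ShapeShifter is known to use an Expectation-over-Transformation (EOT) style loss against Faster R-CNN. The sketch below illustrates one plausible reading of the proposed change, where white Gaussian noise is injected into each transformed sample before the detector loss is evaluated; the function names (detector_loss, transforms) and the exact placement of the noise term are assumptions for illustration, not the paper's actual code.

```python
import torch

def adversarial_loss(x_adv, x_orig, detector_loss, transforms,
                     sigma=0.1, c=0.002):
    """EOT-style ShapeShifter loss with added white Gaussian noise (sketch).

    x_adv:         unconstrained adversarial variable being optimized
    x_orig:        original (benign) stop-sign image, values in [-1, 1]
    detector_loss: callable returning Faster R-CNN's loss toward the
                   attacker's target label (hypothetical stand-in)
    transforms:    random physical transformations (rotation, scale, ...)
    sigma:         standard deviation of the injected Gaussian noise
    c:             weight of the L2 similarity penalty
    """
    patch = torch.tanh(x_adv)  # keep pixel values in [-1, 1]
    total = 0.0
    for t in transforms:       # Monte Carlo expectation over transformations
        transformed = t(patch)
        # White Gaussian noise N(0, sigma^2) added per sample (assumed form)
        noisy = transformed + sigma * torch.randn_like(transformed)
        total = total + detector_loss(noisy)
    eot_term = total / len(transforms)
    # L2 penalty keeps the perturbed sign visually close to the original
    return eot_term + c * torch.sum((patch - x_orig) ** 2)
```

Adding noise inside the expectation forces the optimizer to find perturbations that survive sensor noise, which is consistent with the claimed robustness to lower-quality photographic equipment.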