Abstract

Trams increasingly deploy object detectors to perceive running conditions, and those detectors are widely built on deep learning networks. The growing use of neural networks has exposed them to severe attacks such as adversarial example attacks, which threaten tram safety. Only by studying adversarial attacks thoroughly can researchers devise better defence methods against them. However, most existing methods for generating adversarial examples have been devoted to classification, and none target tram environment perception systems. In this paper, we propose an improved projected gradient descent (PGD) algorithm and an improved Carlini and Wagner (C&W) algorithm to generate adversarial examples against Faster R-CNN object detectors. Experiments verify that both algorithms can successfully conduct nontargeted and targeted white-box digital attacks while trams are running. We also compare the performance of the two methods, including attack effect, similarity to clean images, and generation time. The results show that both algorithms can generate adversarial examples within 220 seconds, a much shorter time, without reducing the success rate.

Highlights

  • Trams have gained popularity because of low cost, high efficiency and environment friendliness [1, 2]

  • A tram perception system usually deploys detectors, such as cameras installed on carriages [4], and deep learning methods, such as convolutional neural networks (CNNs), to analyse visions captured by the cameras. The systems are used to detect obstacles and their locations beforehand and take corresponding measures

  • Given the common standard that a detection is counted as successful when its confidence is no less than 50%, this result indicated that the nontargeted attack initiated by the improved projected gradient descent (PGD) method against the Faster R-CNN succeeded

Summary

Introduction

Trams have gained popularity because of low cost, high efficiency and environment friendliness [1, 2]. Wang et al [28] used PGD to produce adversarial examples on the total loss of the Faster R-CNN object detector, which achieves a high success rate and is applicable to numerous neural network architectures. Our approach is designed for tram environment perception systems to generate adversarial examples, which, to the best of our knowledge, is the first research to do so. It can conduct both targeted and nontargeted attacks with less time and high success rates. The improved PGD algorithm proposed in this paper is a kind of white-box attack that only targets Faster R-CNNs. Therefore, before the attack, all parameters of the Faster R-CNN are required in order to generate adversarial examples. The loss function is calculated after removing unnecessary region proposals in order to reach optimized convergence
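The white-box PGD procedure described above can be sketched in a minimal, self-contained form. This is an illustrative sketch only, not the paper's implementation: the detector loss is replaced by a toy differentiable stand-in, and the function name `pgd_attack`, the step size `alpha`, and the perturbation budget `eps` are assumptions for illustration. In a real attack, `grad_fn` would return the gradient of the Faster R-CNN total loss (after pruning unnecessary region proposals) with respect to the input image.

```python
import numpy as np

def pgd_attack(x_clean, grad_fn, eps=0.03, alpha=0.007, steps=10):
    """Nontargeted PGD sketch: ascend the loss gradient, then project
    each iterate back into the L-infinity ball of radius eps around
    the clean image and into the valid pixel range [0, 1]."""
    x_adv = x_clean.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                    # dLoss/dx at the current point
        x_adv = x_adv + alpha * np.sign(g)    # signed gradient-ascent step
        x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)  # eps-ball projection
        x_adv = np.clip(x_adv, 0.0, 1.0)      # keep pixels valid
    return x_adv

# Toy stand-in for the detector loss: L(x) = w . x, so dL/dx = w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = pgd_attack(x, grad_fn=lambda xa: w)
print(np.max(np.abs(x_adv - x)))  # perturbation never exceeds eps
```

Because `alpha * steps` exceeds `eps`, the projection step is what bounds the final perturbation, which is the defining difference between PGD and a single-step method such as FGSM.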
