Abstract

In recent years, the number of vehicles in cities has been increasing steadily. This growth has caused a series of traffic problems, such as road congestion, traffic accidents, and environmental pollution. The Internet of Vehicles (IoV) has emerged to address these problems. Deep learning has achieved significant success in many fields and is also applied in the IoV. However, studies have shown that deep learning models are vulnerable to crafted samples formed by adding small perturbations to original samples. This vulnerability may therefore pose a serious security threat to the IoV. To assess the security of deep learning in this setting, we conduct experiments to investigate whether adversarial samples exist in the IoV field. We generate adversarial examples from GPS data, attacking trajectory mode detection models. Since GPS trajectory data, like image data, are continuous, we adapt attack algorithms from computer vision to generate the adversarial examples: the white-box Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM), and the black-box One Pixel Attack. We adopt Dynamic Time Warping (DTW) to measure the similarity between the adversarial examples and the original trajectory data. Experimental results show that a small perturbation can fool deep neural networks with high confidence.
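To make the attack pipeline concrete, below is a minimal sketch, not the authors' code, of how FGSM/BIM-style perturbations could be applied to a GPS trajectory tensor and how DTW could quantify the resulting distortion. The classifier architecture, tensor shapes, and epsilon values are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming a PyTorch classifier over fixed-length trajectories.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, traj, label, epsilon):
    """One FGSM step: move the input along the sign of the loss gradient."""
    traj = traj.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(traj), label)
    loss.backward()
    return (traj + epsilon * traj.grad.sign()).detach()

def bim_perturb(model, traj, label, epsilon, alpha, steps):
    """BIM: iterate small FGSM steps, clipping back into an epsilon-ball."""
    adv = traj.clone().detach()
    for _ in range(steps):
        adv = fgsm_perturb(model, adv, label, alpha)
        adv = torch.clamp(adv, traj - epsilon, traj + epsilon)
    return adv

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW between two point sequences."""
    n, m = len(a), len(b)
    d = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = float(torch.dist(a[i - 1], b[j - 1]))
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Hypothetical setup: 2 channels (lat, lon) over 128 GPS points, 4 travel modes.
model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 4))
traj = torch.randn(1, 2, 128)   # one normalized trajectory (placeholder data)
label = torch.tensor([2])       # true transportation mode, e.g. "bus"
adv = bim_perturb(model, traj, label, epsilon=0.05, alpha=0.01, steps=10)
print("DTW(original, adversarial):", dtw_distance(traj[0].T, adv[0].T))
```

FGSM takes a single gradient-sign step of size epsilon, while BIM takes several smaller steps of size alpha and clips the result so the adversarial trajectory stays within an epsilon-ball of the original; the DTW score then indicates how close the perturbed trajectory remains to the original.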
