Abstract

The Vehicular CrowdSensing (VCS) network is one of the key scenarios for future 6G ubiquitous artificial intelligence. In a VCS network, vehicles are recruited to collect urban data and perform deep model inference. Due to the limited computing power of vehicles, we deploy a device-edge co-inference paradigm to improve inference efficiency in the VCS network. Specifically, the vehicular device and the edge server each keep a part of the deep model and work together to perform the inference by sharing intermediate results. Although vehicles keep the raw data locally, privacy issues still arise if attackers obtain the shared intermediate results and manage to recover the raw data from them. In this paper, we validate this possibility by conducting a systematic study of the privacy attack and defense in the co-inference of the VCS network. The main contributions are threefold: (1) We take the road sign classification task as an example to demonstrate how an attacker reconstructs the raw data without any knowledge of the deep model. (2) We propose a model-perturbation defense against such attacks that injects random Laplace noise into the deep model. A theoretical analysis shows that the proposed defense mechanism achieves ε-differential privacy. (3) We further propose a Stackelberg game-based incentive mechanism that attracts vehicles to participate in the co-inference by compensating their privacy loss in a satisfactory way. The simulation results show that the proposed defense mechanism significantly reduces the effect of the attacks and that the proposed incentive mechanism is effective.
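
As context for the abstract, the sketch below illustrates how device-edge co-inference and the Laplace model-perturbation defense fit together in PyTorch. The split point, layer shapes, the DeviceHead/EdgeTail modules, the 43-class output, and the noise scale are illustrative assumptions, not the paper's actual architecture or noise calibration.

```python
# Minimal sketch of device-edge co-inference with a Laplace model-perturbation
# defense. The split point, layer sizes, and noise scale are assumed for
# illustration and are not taken from the paper.
import torch
import torch.nn as nn

class DeviceHead(nn.Module):
    """Front part of the deep model kept on the vehicular device."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x)  # intermediate result V shared with the edge

class EdgeTail(nn.Module):
    """Back part of the deep model kept on the edge server."""
    def __init__(self, num_classes=43):  # e.g., 43 road-sign classes (assumed)
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, v):
        return self.classifier(v)

def perturb_model(model: nn.Module, scale: float) -> None:
    """Model-perturbation defense: inject random Laplace noise into the
    device-side parameters before any intermediate results are shared.
    In the paper the noise scale is tied to the epsilon-DP analysis
    (e.g., on the order of the sensitivity divided by epsilon); the value
    used here is only a placeholder."""
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.distributions.Laplace(0.0, scale).sample(p.shape)
            p.add_(noise)

# Co-inference: the device computes V locally, the edge finishes the inference.
device_part, edge_part = DeviceHead(), EdgeTail()
perturb_model(device_part, scale=0.05)   # assumed noise scale
image = torch.randn(1, 3, 32, 32)        # raw data stays on the vehicle
v = device_part(image)                   # only V is transmitted to the edge
logits = edge_part(v)
```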

Highlights

  • With the development of the Internet of Vehicles (IoV), more and more vehicle-to-everything (V2X) communication technologies emerge, such as IEEE-based dedicated short-range communication (DSRC) technologies and 3GPP-based LTE technologies [1, 2]. These technologies support stable wireless communication between vehicles and roadside infrastructures [3, 4]

  • The simulation results show that the proposed defense mechanism significantly reduces the effect of the attacks and that the proposed incentive mechanism is effective

  • In the results and discussion, we conduct experiments to evaluate the performance of the black-box reconstruction attack and the proposed model-perturbation defense mechanism

Summary

Introduction

With the development of the Internet of Vehicles (IoV), more and more vehicle-to-everything (V2X) communication technologies emerge, such as IEEE-based dedicated short-range communication (DSRC) technologies and 3GPP-based LTE technologies [1, 2]. In VCS networks, vehicular devices and edge servers work together to improve the efficiency of deep model inference and to reduce communication costs. We use a black-box reconstruction attack, which is able to recover the raw input data based only on the intermediate output, to validate the privacy vulnerability of co-inference: we adopt it to recover the input image in the road sign classification task, which demonstrates the privacy vulnerability of the co-inference paradigm and limits its deployment in VCS networks. We consider a black-box setting in which the attacker does not know the structure or parameters of the deep model f_θ1, but can query the model, i.e., use arbitrary data X as input to run the model and observe the intermediate outputs V = f_θ1(X). Δf denotes the global sensitivity, i.e., the maximum difference between the outputs ||f(X) − f(X′)||_1 over any pair of inputs X and X′.
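
The black-box setting above can be made concrete with a short sketch: the attacker treats f_θ1 as a query oracle, collects (X, V) pairs from inputs of its own choosing, and trains an inverse decoder network to map the intermediate output V back to the raw input X. The InverseDecoder architecture, loss, and hyperparameters below are assumptions for illustration and are not the attack configuration evaluated in the paper.

```python
# Sketch of a black-box reconstruction attack: the attacker only needs query
# access to the device-side model f_theta1 and never sees its structure or
# parameters. Architecture and hyperparameters are assumed, not the paper's.
import torch
import torch.nn as nn

class InverseDecoder(nn.Module):
    """Maps intermediate outputs V back to an estimate of the raw input X."""
    def __init__(self):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, v):
        return self.deconv(v)

def train_attack(query_oracle, query_images, epochs=10):
    """query_oracle(X) plays the role of f_theta1: the attacker can only
    observe V = f_theta1(X) for inputs X of its own choosing."""
    decoder = InverseDecoder()
    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in query_images:          # attacker-chosen query batches
            with torch.no_grad():
                v = query_oracle(x)     # observed intermediate output
            x_hat = decoder(v)          # reconstructed raw data
            loss = loss_fn(x_hat, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return decoder
```

Once trained on its own query data, the decoder can be applied to intermediate results intercepted from victim vehicles, which is what the model-perturbation defense in the next section is meant to prevent.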

Shields: model perturbation defense
Results and discussion
Conclusion