Abstract
Deep learning-based services have achieved great success in many fields and profoundly changed our daily lives. To support such services, the provider must continually collect data from users while protecting users' privacy at the same time. Adversarial deep learning is of widespread interest to service providers because of its ability to automatically select privacy-preserving features that have little impact on the Quality of Service (QoS). However, choosing an appropriate threshold to adjust the weight between the QoS and privacy preservation is a significant issue for both the provider and users. In this paper, we model the conflicting incentives between the QoS and privacy preservation as an evolutionary game, and derive an Evolutionary Stable Strategy (ESS) that helps users decide whether or not to submit high-quality data. First, we define each user's individual contribution to the QoS and the privacy cost of submitting high-quality data. Then, we propose an incentive mechanism that accounts for users being boundedly rational and lacking complete knowledge of other users' choices. Moreover, we propose an ESS-based algorithm for balancing the QoS and privacy risk, which reaches a stable state that sustains long-term service through multiple iterations. Finally, we conduct simulation experiments to demonstrate that our strategy can efficiently incentivize users to make a trade-off between the long-term benefits of the QoS and the current cost of privacy.
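The iterative convergence to an ESS described above can be illustrated with a minimal replicator-dynamics sketch. The payoff structure here is an assumption for illustration only (the paper's actual contribution and cost functions are not given in the abstract): the marginal QoS gain `r * (1 - x)` diminishes as the fraction `x` of users submitting high-quality data grows, while the privacy cost `c` of submitting is fixed.

```python
# Hypothetical replicator-dynamics sketch of the QoS/privacy trade-off.
# Assumptions (not from the paper): the QoS advantage of submitting
# high-quality data is r*(1 - x), diminishing in the cooperating
# fraction x, and the privacy cost of submitting is a constant c.

def replicator_ess(r=1.0, c=0.4, x0=0.2, dt=0.1, steps=5000):
    """Evolve the fraction x of users submitting high-quality data."""
    x = x0
    for _ in range(steps):
        # Payoff advantage of submitting high-quality data:
        # diminishing QoS gain minus the fixed privacy cost.
        advantage = r * (1.0 - x) - c
        # Replicator equation: dx/dt = x * (1 - x) * advantage
        x += dt * x * (1.0 - x) * advantage
    return x

x_star = replicator_ess()
print(round(x_star, 3))  # converges to the interior ESS x* = 1 - c/r = 0.6
```

Under these assumed payoffs the population settles at the interior fixed point where the payoff advantage vanishes, which is the stable state the ESS-based algorithm is said to reach through repeated iterations.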