Abstract

Incentive mechanisms play a critical role in privacy-aware crowdsensing. Most previous studies assume a trustworthy fusion center (FC) in their co-design of incentive mechanisms and privacy preservation. Very recent work has relaxed the assumption of a trustworthy FC and allowed participatory users (PUs) to randomly report their binary sensing data, but its focus is on examining PUs' equilibrium behavior. Making a paradigm shift, this paper studies privacy compensation for continuous data sensing while allowing the FC to directly control PUs. Two objectives conflict in this scenario: the FC desires higher-quality data in order to achieve higher aggregation accuracy, whereas PUs prefer injecting larger noise for higher privacy-preserving levels (PPLs). To strike a balance between them, we propose an efficient incentive mechanism, named REAP, that reconciles the FC's aggregation accuracy with individual PUs' data privacy. Specifically, we adopt the celebrated notion of differential privacy to quantify PUs' PPLs and characterize their impact on the FC's aggregation accuracy. Then, appealing to contract theory, we design an incentive mechanism that maximizes the FC's aggregation accuracy under a given budget. The proposed mechanism offers different contracts to PUs with different privacy preferences, through which the FC can directly control them. It further overcomes the information asymmetry problem, i.e., that the FC typically does not know each PU's precise privacy preference. We derive closed-form solutions for the optimal contracts under both complete and incomplete information. We then generalize the results to the continuous case, where PUs' privacy preferences take values in a continuous domain. Extensive simulations validate the feasibility and advantages of the proposed incentive mechanism.
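To make the accuracy-privacy tension concrete, the sketch below (not from the paper; all function names and the example values are illustrative) shows the standard epsilon-differentially-private Laplace mechanism that noise injection of this kind typically relies on: each PU perturbs its continuous reading with Laplace noise whose scale grows as its privacy parameter epsilon shrinks, and the FC's aggregate becomes less accurate accordingly.

```python
import random


def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) as the difference of two iid exponentials."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)


def perturb(value, epsilon, sensitivity=1.0, rng=random):
    """epsilon-DP Laplace mechanism: a smaller epsilon (stronger privacy,
    i.e., a higher PPL) means larger noise, hence lower accuracy."""
    return value + laplace_noise(sensitivity / epsilon, rng)


def aggregate(reports):
    """FC's estimate: the mean of the PUs' noisy reports."""
    return sum(reports) / len(reports)


# Each PU reports a perturbed copy of a common true value (0.6 here);
# the FC's aggregation error shrinks as epsilon grows.
rng = random.Random(0)
for epsilon in (0.1, 1.0, 10.0):
    reports = [perturb(0.6, epsilon, rng=rng) for _ in range(1000)]
    error = abs(aggregate(reports) - 0.6)
```

This is the mechanism an incentive scheme like the paper's must price: each contract effectively buys a smaller noise scale (a larger epsilon) from a PU in exchange for privacy compensation.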
