Abstract

When user-centric location privacy protections are made available to the public (e.g., published as a mobile application on the Mac App Store), they become vulnerable to a novel class of inference attacks. Such attacks can exploit the openly usable protection's input–output pairs to closely study its structure and mechanism. We name this class of attacks knowing-and-learning (KL) attacks. Targeting such potential yet threatening deep-learning-based KL attacks in local search services, we propose LPP2KL, a novel location privacy protection framework. LPP2KL generates an obfuscated location that is robust to these attacks, so that the user's real current location can be replaced by the generated pseudo location when submitted to the untrusted search server. To achieve this, we first describe a feasible network structure that implements the deep KL attack and verify its power with extensive experiments. Second, to simulate the attack-and-defense process, we develop a novel network structure comprising an adversary net and a protection net. The two nets play a mini–max game over privacy inference in an adversarial manner until an equilibrium is reached. Meanwhile, the protection net also balances preserved privacy against quality of service through utility-constrained privacy optimization, for which we introduce a novel penalty function. Extensive experiments on the Yelp and FoursSwarm datasets validate the generality of LPP2KL in protection handling and illustrate its effectiveness in utility control under acceptable quality of service.
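The adversarial mini–max game with a penalty term for the utility constraint can be illustrated with a deliberately tiny sketch. Everything below is an illustrative assumption, not the paper's LPP2KL architecture: locations are 1-D, the "protection net" is a single noise-scale parameter theta (pseudo = real + Uniform(-theta, theta)), the "adversary net" is a single bias parameter w, and the closed-form expected inference error E[(noise - w)^2] = theta^2/3 + w^2 replaces stochastic training.

```python
# Toy 1-D sketch of the attack-and-defense mini-max game with a penalty
# for the utility constraint (illustrative assumptions throughout; this is
# NOT the paper's LPP2KL implementation).

BUDGET = 1.0    # quality-of-service bound on the obfuscation magnitude
PENALTY = 10.0  # weight of the penalty for violating the utility budget
LR = 0.05       # shared learning rate for both players

theta = 0.1  # "protection net": pseudo = real + Uniform(-theta, theta)
w = 0.5      # "adversary net": estimates real as pseudo - w

for _ in range(500):
    # Adversary minimizes expected inference error
    # E[(noise - w)^2] = theta^2/3 + w^2 (zero-mean uniform noise),
    # whose gradient in w is 2w.
    w -= LR * 2 * w
    # Protector maximizes the same error, minus a penalty that activates
    # once the noise magnitude theta exceeds the utility budget.
    grad_privacy = 2 * theta / 3
    grad_penalty = 2 * PENALTY * max(0.0, theta - BUDGET)
    theta += LR * (grad_privacy - grad_penalty)

# At equilibrium the adversary's bias vanishes (w -> 0) and theta settles
# where the privacy gradient balances the penalty gradient:
# 2*theta/3 = 2*PENALTY*(theta - BUDGET), i.e. theta = 30/29 here.
print(round(w, 6), round(theta, 6))
```

Even this toy version shows the two qualitative behaviors in the abstract: the adversary's advantage is driven to zero at the game's equilibrium, and the penalty term keeps the obfuscation magnitude pinned near the utility budget rather than growing without bound.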
