Abstract

Decentralized machine learning, such as Federated Learning (FL), is widely adopted across many application domains. In domains like recommendation systems in particular, sharing gradients instead of private data has recently attracted the research community's attention. Personalized travel route recommendation uses users' location data to recommend optimal travel routes. Location data is extremely privacy sensitive, as it risks exposing behavioural patterns and demographic attributes. FL for route recommendation can avoid sharing raw location data. However, this paper shows that an adversary can still recover the user trajectories used to train a federated recommendation model with high spatial proximity. To this end, we propose a novel attack, DeepSneak, which uses the gradients shared during global model training in FL to reconstruct private user trajectories. We formulate the attack as a regression problem and train a generative model by minimizing the distance between the shared gradients and those induced by its reconstructions. We validate DeepSneak on two real-world trajectory datasets. The results show that we can recover the location trajectories of users with reasonable spatial and semantic accuracy.
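The core gradient-matching idea can be sketched as follows. This is a minimal, hypothetical toy, not the paper's DeepSneak architecture: it uses a linear model with a squared loss in place of the recommendation model, a single known label, and an off-the-shelf optimizer instead of a trained generative model. All names and parameters are illustrative assumptions. The attacker observes a victim's shared gradient and searches for a dummy input whose gradient matches it.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d = 4
w = rng.normal(size=d)        # global model weights, known to the attacker
x_true = rng.normal(size=d)   # private victim input (stand-in for a trajectory)
y = 1.0                       # label, assumed known here for simplicity

def leaked_grad(x):
    """Victim-side gradient of the squared loss (w.x - y)^2 w.r.t. the weights w."""
    return 2.0 * (w @ x - y) * x

g_star = leaked_grad(x_true)  # gradient shared during FL training

def matching_loss(x):
    """Distance between the dummy input's gradient and the observed one."""
    return np.sum((leaked_grad(x) - g_star) ** 2)

# Attacker: optimize a dummy input until its gradient matches g_star,
# restarting from several random points to escape local minima.
best = None
for _ in range(10):
    res = minimize(matching_loss, rng.normal(size=d), method="L-BFGS-B")
    if best is None or res.fun < best.fun:
        best = res

x_rec = best.x  # reconstructed input; its gradient closely matches g_star
```

Note that a matched gradient does not always pin down a unique input (here, any preimage with the same gradient is a valid solution), which is one reason the paper trains a generative model rather than optimizing each trajectory from scratch.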
