Decentralized machine learning, such as Federated Learning (FL), is widely adopted across many application domains. In domains such as recommendation systems in particular, sharing gradients instead of private data has recently attracted the research community's attention. Personalized travel route recommendation utilizes users' location data to recommend optimal travel routes. Location data is extremely privacy-sensitive, as it carries increased risks of exposing behavioural patterns and demographic attributes. FL for route recommendation avoids sharing raw location data. However, this paper shows that an adversary can still recover, with high proximity accuracy, the user trajectories used to train federated recommendation models. To this end, we propose a novel attack called DeepSneak, which uses the gradients shared during global model training in FL to reconstruct private user trajectories. We formulate the attack as a regression problem and train a generative model by minimizing the distance between the gradients it induces and the shared gradients. We validate DeepSneak on two real-world trajectory datasets. The results show that we can recover user location trajectories with reasonable spatial and semantic accuracy.
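The attack described above belongs to the broader family of gradient-matching reconstructions: the adversary optimizes a dummy input until the gradient it induces matches the gradient shared in FL. A minimal sketch of this idea on a toy linear model follows; the model, data, and finite-difference optimizer are illustrative assumptions for this sketch, not DeepSneak's actual generative-model setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Victim's model: linear regression with bias, squared loss.
w = rng.normal(size=3)
b = 0.5

def grads(x, y):
    """Analytic gradients of (w.x + b - y)^2 w.r.t. (w, b)."""
    r = w @ x + b - y          # residual
    return 2 * r * x, 2 * r    # (dL/dw, dL/db)

# Private training point; y_true is chosen so the residual is exactly 1.0,
# which keeps this toy problem well conditioned.
x_true = rng.normal(size=3)
y_true = float(w @ x_true + b - 1.0)
g_w, g_b = grads(x_true, y_true)   # the gradient the adversary observes

def dist(z):
    """Gradient-matching objective for a dummy point z = (x', y')."""
    gw, gb = grads(z[:3], z[3])
    return np.sum((gw - g_w) ** 2) + (gb - g_b) ** 2

def attack(z, lr=0.01, steps=5000, eps=1e-5):
    """Gradient descent on dist() via central finite differences,
    halving the step whenever it would increase the objective."""
    for _ in range(steps):
        g = np.zeros(4)
        for j in range(4):
            zp, zm = z.copy(), z.copy()
            zp[j] += eps
            zm[j] -= eps
            g[j] = (dist(zp) - dist(zm)) / (2 * eps)
        step = lr
        while dist(z - step * g) > dist(z) and step > 1e-9:
            step /= 2
        z = z - step * g
    return z

# A few random restarts guard against saddle points and bad inits.
z_best = min((attack(rng.normal(size=4)) for _ in range(3)), key=dist)
x_rec = z_best[:3]
print("gradient distance:", dist(z_best))
print("reconstruction error:", np.linalg.norm(x_rec - x_true))
```

On this toy model the bias gradient pins down the residual, so driving the gradient distance to zero recovers the private point exactly; trajectory reconstruction attacks face a much harder, non-convex version of the same matching problem over sequences of locations.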