Abstract

It is known that various types of location privacy attacks can be carried out using a personalized transition matrix learned for each target user, or a population transition matrix common to all target users. However, since many users disclose only a small amount of location information in their daily lives, the training data can be extremely sparse. The aim of this paper is to clarify the risk of location privacy attacks in this realistic situation. To achieve this aim, we propose a learning method that uses tensor factorization (or matrix factorization) to accurately estimate personalized transition matrices (or a population transition matrix) from a small amount of training data. To avoid the difficulty of directly factorizing the personalized transition matrices (or the population transition matrix), our learning method first factorizes a transition count tensor (or matrix), whose elements are the numbers of transitions each user has made, and then normalizes the counts to probabilities. We focus on a localization attack, which infers the actual location of a user at a given time instant from an obfuscated trace, and compare our learning method with the maximum likelihood (ML) estimation method in both the personalized matrix mode and the population matrix mode. Experimental results on four real data sets show that the ML estimation method performs only as well as a random guess in many cases, while our learning method significantly outperforms it on all four data sets.
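The count-then-normalize idea can be illustrated for the population-matrix case: factorize a sparse transition count matrix with a low-rank nonnegative model, then row-normalize the reconstruction to obtain transition probabilities. The sketch below is an assumption-laden toy, not the paper's algorithm — it uses plain Lee–Seung multiplicative updates on a single 4-location count matrix, and the example counts are invented.

```python
import random

def matmul(A, B):
    # Naive matrix product of nested lists.
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(C, rank, iters=200, eps=1e-9):
    """Factorize a nonnegative count matrix C ~= U V^T via
    Lee-Seung multiplicative updates (squared-error objective)."""
    random.seed(0)
    n, m = len(C), len(C[0])
    U = [[random.random() for _ in range(rank)] for _ in range(n)]
    V = [[random.random() for _ in range(rank)] for _ in range(m)]
    for _ in range(iters):
        # U update: U *= (C V) / (U (V^T V))
        CV = matmul(C, V)
        UVtV = matmul(U, matmul(transpose(V), V))
        U = [[U[i][r] * CV[i][r] / (UVtV[i][r] + eps) for r in range(rank)]
             for i in range(n)]
        # V update: V *= (C^T U) / (V (U^T U))
        CtU = matmul(transpose(C), U)
        VUtU = matmul(V, matmul(transpose(U), U))
        V = [[V[j][r] * CtU[j][r] / (VUtU[j][r] + eps) for r in range(rank)]
             for j in range(m)]
    return matmul(U, transpose(V))

def row_normalize(M, eps=1e-9):
    """Turn reconstructed counts into transition probabilities."""
    return [[x / (sum(row) + eps) for x in row] for row in M]

# Hypothetical sparse transition counts over 4 locations for one population.
counts = [
    [0, 3, 0, 0],
    [1, 0, 2, 0],
    [0, 0, 0, 0],   # location never observed as an origin
    [0, 0, 1, 0],
]
P = row_normalize(nmf(counts, rank=2))
```

Note the limitation of a single-matrix toy: a row with no observed transitions stays zero here, whereas the paper's tensor factorization across users lets such rows borrow strength from other users' behavior.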
