Abstract

An increasing volume of users' location data is being aggregated, and statistical results computed over the collected data are then released to support mobile applications such as point-of-interest recommendation and smart transportation. However, such statistical results can leak users' membership privacy. Unfortunately, most studies on data aggregation focus on privacy preservation and other classes of attacks rather than membership inference attacks. Moreover, the existing literature on membership inference attacks mainly targets machine learning models and gene sequences rather than locations in data aggregation. More importantly, these works assume that adversaries know the exact data of victims, which is rarely possible in practical scenarios. To this end, we propose LocMIA, a more practical attack system that allows adversaries to launch membership inference attacks against aggregated location data without relying on any prior knowledge of the victims' locations. The main idea of LocMIA is to train a binary classifier that infers whether a specific victim's location data is included in the aggregation group, based solely on the output of the data aggregation (i.e., the statistical results). Finally, experimental results on a real-world check-in dataset demonstrate the severe privacy leakage caused by the proposed LocMIA.
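As a rough illustration of this idea (not the paper's exact pipeline), the sketch below trains a binary classifier on released aggregate location counts to decide whether a target user is part of the aggregation group. The synthetic check-in data, the shadow-aggregate sampling, and the scikit-learn classifier are all illustrative assumptions, not components taken from LocMIA itself.

```python
# Toy sketch of a membership inference attack on aggregated location counts.
# Assumptions (not from the paper): the adversary can build "shadow" aggregation
# groups from an auxiliary population, some containing a stand-in for the target
# user and some not, and trains a binary classifier on the released aggregates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
NUM_LOCATIONS = 50          # size of the discretized location grid
GROUP_SIZE = 100            # users per aggregation group
NUM_SHADOW_GROUPS = 2000    # shadow aggregates used as training data

def sample_user():
    """One user's check-in count vector over the location grid (synthetic)."""
    return rng.poisson(lam=rng.uniform(0.1, 2.0, size=NUM_LOCATIONS))

# Fixed stand-in for the target user whose membership we try to infer.
target_user = sample_user()

def make_aggregate(include_target):
    """Release per-location check-in sums over one aggregation group."""
    group = [sample_user() for _ in range(GROUP_SIZE - int(include_target))]
    if include_target:
        group.append(target_user)
    return np.sum(group, axis=0)

# Build shadow aggregates: features are the released counts, labels are membership.
labels = rng.integers(0, 2, size=NUM_SHADOW_GROUPS)
features = np.stack([make_aggregate(bool(y)) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("Membership inference AUC:",
      roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

An AUC noticeably above 0.5 on the held-out shadow aggregates indicates that the released statistics alone carry a membership signal, which is the kind of leakage the abstract describes.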
