Abstract

Analyzing the large-scale spatial-temporal k-anonymity datasets recorded by location-based service (LBS) application servers can benefit many LBS applications. However, such analysis also allows adversaries to mount inference attacks that cannot be handled by spatial-temporal k-anonymity methods or by other techniques for protecting sensitive knowledge. To address this challenge, we first defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large-scale anonymity datasets. We then proposed a novel on-line spatial-temporal k-anonymity method that resists such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of the privacy-sensitive sequence rules: the new datasets extend the original sequence database of anonymity datasets so that the privacy-sensitive rules are progressively hidden. The process comprises two phases, off-line analysis and on-line application. In the off-line phase, sequence rules are mined from the original sequence database of anonymity datasets, and privacy-sensitive sequence rules are identified by correlating privacy-sensitive spatial regions with the spatial grid cells appearing in those rules. In the on-line phase, new anonymity datasets are generated on each LBS request by applying specific generalization and avoidance principles, so that the privacy-sensitive sequence rules are progressively hidden in the extended sequence database of anonymity datasets. We conducted extensive experiments to evaluate the performance of the proposed method and to explore the influence of the parameter K. The results show that our approach hides privacy-sensitive sequence rules faster and more effectively, achieving higher ratios of hidden sensitive rules and thereby eliminating inference attacks. Our method also produces fewer side effects than the traditional spatial-temporal k-anonymity method in terms of the ratio of newly generated sensitive rules, while showing essentially the same side effects in terms of the variation ratio of non-sensitive rules. Furthermore, we characterized how performance varies with the parameter K, which helps achieve the goal of hiding the maximum number of original sensitive rules while generating the minimum number of new sensitive rules and affecting the minimum number of non-sensitive rules.
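
To make the two-phase idea concrete, the following is a minimal, illustrative Python sketch rather than the authors' implementation. The Rule structure, the is_sensitive check, the build_anonymity_set helper, and the grid-cell encoding are hypothetical names chosen here for exposition only; the paper's actual mining algorithm and generalization/avoidance rules are more involved.

```python
# Minimal illustrative sketch (not the paper's implementation) of the two phases:
# off-line flagging of privacy-sensitive sequence rules, and on-line construction
# of a k-anonymity set that avoids extending those rules. All names are hypothetical.

from dataclasses import dataclass
from typing import List, Set, Tuple

Cell = Tuple[int, int]  # spatial grid cell index (row, col)

@dataclass
class Rule:
    antecedent: Tuple[Cell, ...]  # observed cell sequence
    consequent: Cell              # predicted next/destination cell
    confidence: float

def is_sensitive(rule: Rule, sensitive_cells: Set[Cell], min_conf: float = 0.5) -> bool:
    """Off-line phase: a mined rule is privacy-sensitive if its consequent lies in a
    grid cell that overlaps a privacy-sensitive spatial region and its confidence is
    high enough to enable destination prediction."""
    return rule.consequent in sensitive_cells and rule.confidence >= min_conf

def build_anonymity_set(true_cell: Cell,
                        candidate_cells: List[Cell],
                        history: Tuple[Cell, ...],
                        sensitive_rules: List[Rule],
                        k: int) -> List[Cell]:
    """On-line phase: choose k cells for the cloaked request, preferring cells that
    do NOT extend the antecedent of any privacy-sensitive rule (avoidance), and
    falling back to the remaining candidates (generalization) only if needed."""
    def extends_sensitive(cell: Cell) -> bool:
        seq = history + (cell,)
        return any(seq[-len(r.antecedent):] == r.antecedent for r in sensitive_rules)

    safe = [c for c in candidate_cells if not extends_sensitive(c)]
    risky = [c for c in candidate_cells if extends_sensitive(c)]
    chosen = [true_cell] + [c for c in safe if c != true_cell]
    if len(chosen) < k:  # not enough safe cells: generalize over risky ones
        chosen += [c for c in risky if c not in chosen]
    return chosen[:k]
```

In this sketch the avoidance principle is approximated by preferring candidate cells that do not complete the antecedent of any sensitive rule, so that repeated requests progressively stop reinforcing those rules in the extended sequence database.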

Highlights

  • The emergence of smartphones that are equipped with GPS receivers has made location-based services (LBS) increasingly popular

  • We proposed a novel on-line spatial-temporal k-anonymity method (NOSTK) designed to resist inference attacks based on privacy-sensitive knowledge mined from large-scale anonymity datasets generated by LBS continuous queries

  • All the location privacy protection methods mentioned above, whether weak privacy modes or more robust privacy models, share a common problem: they function at the data level, so they cannot deal with inference attacks based on privacy-sensitive knowledge mined from large-scale anonymity datasets (although [34] takes geographic location patterns into account, it only ensures the availability of interesting patterns and does not consider how to counter attacks based on sensitive patterns)


Summary

Introduction

The emergence of smartphones equipped with GPS receivers has made location-based services (LBS) increasingly popular. Because spatial-temporal k-anonymity and its series of optimization variants operate at the data level, they offer no countermeasures against attacks based on privacy-sensitive knowledge mined from large-scale anonymity datasets. Knowledge hiding techniques, in turn, cannot deal with the dynamic nature of LBS applications, which calls for on-line privacy preservation techniques that differ from off-line ones. To overcome these challenges, we first defined a destination location prediction attack model based on privacy-sensitive sequence rules. We then proposed a novel on-line spatial-temporal k-anonymity method (NOSTK) designed to resist inference attacks based on privacy-sensitive knowledge mined from large-scale anonymity datasets generated by LBS continuous queries.
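
The following is a hedged, illustrative sketch of the destination prediction attack idea, assuming an adversary who has mined sequence rules from published anonymity datasets. The Rule structure and the predict_destination helper are hypothetical names used only to make the attack concrete; they are not the paper's attack model specification.

```python
# Illustrative sketch (assumed, simplified) of a destination prediction inference
# attack: match the tail of the cells observed so far in a continuous LBS query
# against mined sequence rules and return the most confident predicted destination.

from dataclasses import dataclass
from typing import List, Optional, Tuple

Cell = Tuple[int, int]

@dataclass
class Rule:
    antecedent: Tuple[Cell, ...]
    consequent: Cell
    confidence: float

def predict_destination(observed: Tuple[Cell, ...],
                        rules: List[Rule]) -> Optional[Tuple[Cell, float]]:
    """Return (destination cell, confidence) for the best-matching rule, or None."""
    best = None
    for r in rules:
        if observed[-len(r.antecedent):] == r.antecedent:
            if best is None or r.confidence > best[1]:
                best = (r.consequent, r.confidence)
    return best

# Example: with a rule ((1,1),(1,2)) -> (3,5) at confidence 0.8, observing the
# prefix ((0,1),(1,1),(1,2)) lets the adversary infer destination cell (3,5).
rules = [Rule(((1, 1), (1, 2)), (3, 5), 0.8)]
print(predict_destination(((0, 1), (1, 1), (1, 2)), rules))  # ((3, 5), 0.8)
```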
