Abstract
LDP (Local Differential Privacy) and its variants have recently been studied to analyze personal data collected from IoT (Internet of Things) devices while strongly protecting user privacy. In particular, a recent study proposes a general privacy notion called ID-LDP (Input-Discriminative LDP), which introduces a privacy budget for each input value to deal with different levels of sensitivity. However, it is unclear how to set an appropriate privacy budget for each input value, especially now that re-identification is considered a major risk, e.g., under the GDPR. Moreover, the number of possible input values can be very large in IoT, so it is also extremely difficult to manually check whether the privacy budget for each input value is appropriate. In this paper, we propose algorithms to automatically tune privacy budgets in ID-LDP so that obfuscated data strongly resist re-identification. We also propose a new instance of ID-LDP called OneID-LDP (One-Budget Input-Discriminative LDP) that prevents re-identification while retaining high utility. Through comprehensive experiments on four real datasets, we show that existing instances of ID-LDP lack either utility or privacy: they either overprotect personal data or are vulnerable to re-identification attacks. We then show that our OneID-LDP mechanisms, combined with our privacy budget tuning algorithm, provide much higher utility than LDP mechanisms while strongly preventing re-identification.
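To make the per-input budget idea concrete, below is a minimal sketch of an input-discriminative variant of k-ary randomized response, written in Python. It is illustrative only and is not the paper's actual instantiation of ID-LDP or OneID-LDP: the function name `id_rr`, the example domain, and the budget assignments are assumptions, and a full ID-LDP analysis would bound the distinguishability of each input pair in terms of both inputs' budgets rather than a single one.

```python
import math
import random

def id_rr(value, domain, eps_map):
    """Illustrative input-discriminative k-ary randomized response.

    Each input value has its own privacy budget eps_map[value]; a smaller
    budget for a more sensitive value lowers the probability of reporting
    the true value, giving that value stronger protection.
    """
    k = len(domain)
    eps = eps_map[value]  # per-input privacy budget
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_true:
        return value  # report the true value
    # Otherwise report a uniformly random value other than the true one.
    return random.choice([v for v in domain if v != value])

# Hypothetical example: "HIV" is treated as more sensitive than "flu" or
# "cold", so it is assigned a smaller budget (stronger obfuscation).
domain = ["flu", "cold", "HIV"]
eps_map = {"flu": 2.0, "cold": 2.0, "HIV": 0.5}
print(id_rr("HIV", domain, eps_map))
```

The sketch highlights exactly the difficulty the abstract raises: someone must choose each entry of `eps_map`, and with a large domain it is infeasible to verify every choice by hand, which is what the proposed automatic budget tuning addresses.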