Abstract

Differential privacy is a state-of-the-art technology for privacy preservation, and the Laplace mechanism is a simple and powerful tool for realizing it. However, differential privacy has an obvious limitation: each query function can be executed only a finite number of times, because an adversary who issues the same query many times can average out the noise and recover the true query result. Unfortunately, how to set the upper bound on the number of linear queries remains an open issue. In this paper, we focus on linear query functions in Laplace-based mechanisms and propose a method for setting this upper bound from the perspective of information theory. The main idea is as follows: we first identify the most aggressive linear query, i.e., the one that leaks the maximum information about the dataset to an adversary, and we then set the upper bound on the number of queries so that even this most aggressive query cannot leak the entire self-information of any individual. The bound also depends on the type of dataset (continuous or discrete), so we derive separate upper bounds for continuous and discrete datasets. Finally, we conduct experiments on a continuous dataset and a discrete dataset to validate our results.
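The averaging attack that motivates the paper can be illustrated with a minimal sketch of the standard Laplace mechanism (noise scale sensitivity/epsilon). The function name, the counting-query value, and the parameter choices below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    """Release a query answer with Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
true_count = 120.0   # hypothetical true result of a counting query (sensitivity 1)
epsilon = 0.5

# A single noisy release hides the exact value...
one_release = laplace_mechanism(true_count, 1.0, epsilon, rng)

# ...but repeating the SAME query many times lets an adversary
# average the independent noise away and approach the true answer.
n = 10000
releases = [laplace_mechanism(true_count, 1.0, epsilon, rng) for _ in range(n)]
estimate = float(np.mean(releases))
print(abs(estimate - true_count) < 0.2)
```

Since the Laplace noise has mean zero and variance 2(sensitivity/epsilon)^2, the standard error of the average shrinks as 1/sqrt(n), which is why a cap on the number of queries is needed.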
