Abstract

The rapid development of data-sharing applications raises a serious risk of privacy disclosure. Differential privacy is an effective privacy-preserving method that mathematically defines the degree of privacy protection and data utility, and can balance the two. However, differential privacy rests on the premise that the raw data are accurate and error-free, so it cannot keep privacy security and data utility within the expected range when processing data that contain errors. This paper therefore studies the influence of data errors on differential privacy. Taking random error as an example, we analyze the mode and mechanism by which data errors affect differential privacy, especially the privacy budget $\varepsilon$. Theoretical derivations and experimental simulations show that the Laplace mechanism still preserves $\varepsilon^{\prime}$-indistinguishability for data with errors. Moreover, the randomized algorithm can achieve the expected privacy-preserving strength by adding less noise than algorithms that do not account for data errors, and attains better data utility by eliminating the unnecessary utility cost. This paper outlines research directions for differential privacy theory in the presence of data errors, and lays foundations for completing the theoretical system and promoting the practical adoption of differential privacy.
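The abstract's central object is the Laplace mechanism, which achieves $\varepsilon$-differential privacy by adding noise with scale $\Delta f / \varepsilon$ (sensitivity over privacy budget). The following is a minimal, self-contained sketch of that standard mechanism; the function names are illustrative and not taken from the paper, and the sampling uses the inverse-CDF method for the Laplace distribution.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale):
    # X = -scale * sgn(U) * ln(1 - 2|U|), with U uniform on (-0.5, 0.5).
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Standard Laplace mechanism: noise scale = sensitivity / epsilon.
    # The paper's result (informally): when the raw data already carry
    # random error, a smaller noise scale suffices to reach the same
    # effective indistinguishability level epsilon'.
    return true_value + laplace_noise(sensitivity / epsilon)
```

For example, releasing a counting query (sensitivity 1) with budget $\varepsilon = 0.5$ calls `laplace_mechanism(count, 1.0, 0.5)`, adding noise of scale 2; averaged over many runs, the released value is unbiased around the true count.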
