Abstract

As data collection increases, more and more sensitive data is used to publish query results, creating a significant risk of privacy disclosure. As a mathematically provable privacy framework, differential privacy (DP) provides a tool to resist background-knowledge attacks. Fuzzy differential privacy (FDP) generalizes DP by employing smaller sensitivity and supporting multiple similarity measures, so the output error can be reduced under FDP. Existing FDP mechanisms employ a sliding-window strategy, which perturbs the true query value into an interval centered at that value to maintain the similarity of outputs from neighboring datasets. However, an attacker may still infer sensitive information from the difference between the left and right endpoints of the output range. To address this issue, this article presents two solutions: fixed-interval perturbation and infinite-interval perturbation. Both strategies perturb the true query values of two neighboring datasets into the same interval and provide fuzzy differential privacy protection for the dataset. We apply the proposed methods to the privacy-preserving problem of subgraph counting in bipartite graphs and verify their effectiveness through experiments.
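The contrast between the two interval strategies can be illustrated with a minimal sketch. This is not the paper's actual mechanism (the abstract does not specify the noise distribution or parameters); it only shows, under assumed uniform sampling and an assumed grid width, why an interval centered at the true value leaks its midpoint while a fixed, data-independent interval does not:

```python
import random

def sliding_window_perturb(true_value, half_width):
    # Sliding-window style: the output interval is centered at the
    # true value, so the interval's endpoints reveal the midpoint.
    return random.uniform(true_value - half_width, true_value + half_width)

def fixed_interval_perturb(true_value, grid_width):
    # Fixed-interval style (illustrative assumption): snap the true
    # value to a predefined grid cell, then sample uniformly inside it.
    # Neighboring query values falling in the same cell yield
    # identically distributed outputs, hiding the exact midpoint.
    lo = (true_value // grid_width) * grid_width
    return random.uniform(lo, lo + grid_width)

# Two neighboring datasets with query values 10.2 and 10.7 map to the
# same fixed cell [8, 12) when grid_width = 4, but to different
# sliding windows centered at 10.2 and 10.7 respectively.
```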

