Abstract

Differential privacy is a mathematical framework that provides strong theoretical privacy guarantees by ensuring the statistical indistinguishability of individuals in a dataset. It has become the de facto standard for privacy-preserving analysis of statistical datasets and has garnered significant attention from researchers and privacy experts. However, the accuracy loss caused by the added noise remains a concern. We first propose a new noise-adding mechanism that preserves \((\epsilon ,\delta )\)-differential privacy, whose noise distribution can be viewed as a generalized truncated Laplacian distribution. We show that, subject to technical lemmas, the proposed mechanism adds optimal noise in a global context, and that it outperforms the optimal Gaussian mechanism. In addition, we propose an \(\epsilon\)-differentially private mechanism that improves the utility of differential privacy by fusing multiple Laplace distributions. We also derive closed-form expressions for the absolute expectation and variance of the noise under both proposed mechanisms. Finally, we empirically evaluate the proposed mechanisms and observe an increase in all utility measures considered, while preserving privacy.
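To make the idea of a truncated Laplacian mechanism concrete, the sketch below samples noise from a Laplace distribution restricted to a bounded interval \([-A, A]\) via rejection sampling. This follows the standard truncated-Laplace construction for \((\epsilon ,\delta )\)-differential privacy with a query of known sensitivity; the paper's *generalized* truncated Laplacian mechanism may use a different parameterization, so treat this as an illustrative assumption rather than the authors' exact method.

```python
import math
import random


def truncated_laplace_noise(sensitivity, epsilon, delta):
    """Sample noise from Laplace(0, sensitivity/epsilon) truncated to [-A, A].

    The bound A below is the standard truncated-Laplace choice for
    (epsilon, delta)-DP; the paper's generalized mechanism may differ.
    """
    scale = sensitivity / epsilon
    # Truncation bound ensuring the clipped tail mass is at most delta.
    A = scale * math.log(1.0 + (math.exp(epsilon) - 1.0) / (2.0 * delta))
    while True:
        # A Laplace draw is the difference of two Exponential(1) draws.
        x = scale * (random.expovariate(1.0) - random.expovariate(1.0))
        if abs(x) <= A:  # keep only samples inside the truncation window
            return x


def privatize(true_value, sensitivity=1.0, epsilon=1.0, delta=1e-5):
    """Release a differentially private estimate of a scalar query."""
    return true_value + truncated_laplace_noise(sensitivity, epsilon, delta)
```

Because the noise is bounded by \(A\), the worst-case error of a single release is capped, which is the intuition behind the utility gains the abstract reports over unbounded mechanisms such as the Gaussian.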
