Abstract

We study a statistical signal processing privacy problem in which an agent observes useful data <inline-formula> <tex-math notation="LaTeX">$Y$ </tex-math></inline-formula> and wants to reveal this information to a user. Since the useful data is correlated with the private data <inline-formula> <tex-math notation="LaTeX">$X$ </tex-math></inline-formula>, the agent employs a privacy mechanism to generate data <inline-formula> <tex-math notation="LaTeX">$U$ </tex-math></inline-formula> that can be released. We study the design of privacy mechanisms that maximize the revealed information about <inline-formula> <tex-math notation="LaTeX">$Y$ </tex-math></inline-formula> while satisfying a strong <inline-formula> <tex-math notation="LaTeX">$\ell _{1}$ </tex-math></inline-formula>-privacy criterion. When a sufficiently small leakage is allowed, we show that the optimizing distributions of the privacy mechanism design problem have a specific geometry: they are perturbations of fixed vector distributions. This geometrical structure allows us to use a local approximation of the conditional entropy. Using this approximation, the original optimization problem reduces to a linear program, so that an approximate solution for the optimal privacy mechanism can be obtained easily. The main contribution of this work is to consider a non-invertible leakage matrix with non-zero leakage. In our first example, inspired by a watermarking application, we demonstrate the accuracy of the approximation. We then employ different measures of utility and privacy leakage to compare the privacy-utility trade-off of our approach with that of other methods. In particular, we show that by allowing a small leakage, significant utility can be achieved with our method compared to the case where no leakage is allowed.
In the second and third examples, which are based on the MNIST data set and medical applications, we illustrate the suggested design of the disclosed data <inline-formula> <tex-math notation="LaTeX">$U$ </tex-math></inline-formula>. We show that the letters of <inline-formula> <tex-math notation="LaTeX">$Y$ </tex-math></inline-formula> that disclose more information about <inline-formula> <tex-math notation="LaTeX">$X$ </tex-math></inline-formula> are combined (randomized) to produce a new letter of <inline-formula> <tex-math notation="LaTeX">$U$ </tex-math></inline-formula>.
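The reduction of the mechanism design to a linear program can be illustrated with a small toy sketch. This is not the paper's actual formulation: here we simply linearize a utility objective around a fixed distribution (the perturbation geometry mentioned above) and enforce an ℓ1 leakage budget via standard auxiliary variables. The distribution `p_y`, the gradient direction `c`, the leakage matrix `W`, and the budget `eps` are all hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import linprog

# Toy fixed distribution over Y; the released distribution is p_y + eta,
# where eta is a small perturbation that must sum to zero.
p_y = np.array([0.4, 0.3, 0.2, 0.1])
n = p_y.size

# Hypothetical linearized utility gradient (stand-in for the local
# approximation of the conditional entropy around p_y).
c = np.log(1.0 / p_y)

# Hypothetical leakage matrix mapping perturbations to leakage about X,
# and a small allowed leakage budget eps.
W = np.array([[0.5, -0.2, 0.1, -0.4],
              [0.1,  0.3, -0.3, -0.1]])
m = W.shape[0]
eps = 0.01

# Maximize c @ eta subject to ||W @ eta||_1 <= eps. The l1 constraint is
# linearized with auxiliary variables t >= |W @ eta| (componentwise):
#   W @ eta - t <= 0,  -W @ eta - t <= 0,  sum(t) <= eps.
# Decision vector: x = [eta; t].
obj = np.concatenate([-c, np.zeros(m)])          # linprog minimizes
A_ub = np.vstack([
    np.hstack([W, -np.eye(m)]),
    np.hstack([-W, -np.eye(m)]),
    np.concatenate([np.zeros(n), np.ones(m)]),   # leakage budget row
])
b_ub = np.concatenate([np.zeros(2 * m), [eps]])
A_eq = np.concatenate([np.ones(n), np.zeros(m)])[None, :]  # sum(eta) = 0
b_eq = np.array([0.0])
# Keep p_y + eta a valid distribution; auxiliary t is nonnegative.
bounds = [(-p_y[i], 1.0 - p_y[i]) for i in range(n)] + [(0, None)] * m

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
eta = res.x[:n]
p_released = p_y + eta
```

Because `eta = 0` is always feasible, the program is solvable for any `eps >= 0`; the optimal perturbation moves probability mass toward high-utility letters until the leakage budget is exhausted, mirroring the trade-off described in the abstract.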
