Abstract

Privacy against an adversary (AD) that tries to detect the underlying privacy-sensitive data distribution is studied. The original data sequence is assumed to come from one of two known distributions, and the privacy leakage is measured by the probability of error of the binary hypothesis test carried out by the AD. A management unit (MU) is allowed to manipulate the original data sequence in an online fashion while satisfying an average distortion constraint. The goal of the MU is to maximize the minimal type II probability of error subject to a constraint on the type I probability of error, assuming an adversarial Neyman–Pearson test, or to maximize the minimal error probability, assuming an adversarial Bayesian test. The asymptotic exponents of the maximum minimal type II probability of error and the maximum minimal error probability are shown to be characterized by a Kullback–Leibler divergence rate and a Chernoff information rate, respectively. The privacy performance of two particular management policies, the memoryless hypothesis-aware policy and the hypothesis-unaware policy with memory, is compared. The proposed formulation can also model adversarial example generation with minimal data manipulation to fool classifiers. Finally, the results are applied to a smart meter privacy problem, where the user's energy consumption is manipulated by adaptively using a renewable energy source in order to hide the user's activity from the energy provider.
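For context, in the classical i.i.d. setting without any data manipulation, the two exponents referred to above reduce to well-known single-letter quantities; the divergence-rate and Chernoff-information-rate characterizations in the paper generalize these to manipulated sequences. A brief sketch of the standard baselines, with $p_0$ and $p_1$ denoting the two hypothesis distributions (this notation is ours, not the paper's):

```latex
% Chernoff–Stein lemma: best type II error exponent
% under a fixed type I error constraint
\beta_n \doteq e^{-n D(p_0 \| p_1)}, \qquad
D(p_0 \| p_1) = \sum_x p_0(x) \log \frac{p_0(x)}{p_1(x)}.

% Bayesian setting: best average error exponent,
% given by the Chernoff information
P_e^{(n)} \doteq e^{-n C(p_0, p_1)}, \qquad
C(p_0, p_1) = -\min_{0 \le \lambda \le 1}
  \log \sum_x p_0(x)^{\lambda} \, p_1(x)^{1-\lambda}.
```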
