Abstract

In a decentralized Internet of Things (IoT) network, a fusion center receives information from multiple sensors to infer a public hypothesis of interest. To prevent the fusion center from abusing the sensor information, each sensor sanitizes its local observation using a local privacy mapping, which is designed to achieve both inference privacy of a private hypothesis and data privacy of the sensor's raw observations. Various inference and data privacy metrics have been proposed in the literature. We introduce the concepts of privacy implication and non-guarantee to study the relationships between these privacy metrics. We propose an optimization framework in which both local differential privacy (data privacy) and information privacy (inference privacy) metrics are incorporated. In the parametric case where the sensor observations' distributions are known \emph{a priori}, we propose a two-stage local privacy mapping at each sensor, and show that this architecture achieves information privacy and local differential privacy within the predefined budgets. For the nonparametric case where the sensor distributions are unknown, we adopt an empirical optimization approach. Simulation and experimental results demonstrate that our proposed approaches allow the fusion center to accurately infer the public hypothesis while protecting both inference and data privacy.
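
For context, the two privacy metrics named above are commonly formulated as follows; this is a sketch of the standard definitions from the literature, with illustrative symbols ($X$ for a sensor observation, $Z$ for its sanitized output, $G$ for the private hypothesis, and budgets $\epsilon_d$, $\epsilon_i$) that are not taken from the paper itself. A local privacy mapping $p_{Z \mid X}$ satisfies $\epsilon_d$-local differential privacy (data privacy) if, for all observation pairs $x, x'$ and all sanitized outputs $z$,
\[
p_{Z \mid X}(z \mid x) \le e^{\epsilon_d}\, p_{Z \mid X}(z \mid x'),
\]
and it achieves $\epsilon_i$-information privacy (inference privacy) with respect to the private hypothesis $G$ if, for all hypothesis values $g$ and all outputs $z$ with $p_Z(z) > 0$,
\[
e^{-\epsilon_i} \le \frac{p_{G \mid Z}(g \mid z)}{p_G(g)} \le e^{\epsilon_i},
\]
i.e., observing $Z$ changes the posterior belief about $G$ by at most a factor of $e^{\epsilon_i}$ relative to the prior.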
