Abstract

Secure and private communication over the Internet of Things (IoT) poses several challenges for smart home systems. In particular, data collected from IoT devices comprise sensitive personal information such as biomedical, financial, location, and activity data. Recent research has explored the use of blockchain in smart home systems to protect the privacy of data in use. Such solutions still need to address privacy with a formal, mathematical model of data privacy, given the vulnerabilities associated with privacy-preserving blockchain networks. In the present paper, we propose a privacy-preserving data aggregation mechanism for smart homes whose users agree to contribute their data to a cloud server that uses machine learning to improve services for home users. We employ differential privacy, a powerful concept in privacy-preserving schemes that provides formal guarantees, expressed through a privacy budget, on how much information is leaked. The main purpose of such a privacy-preserving scheme is to limit what can be inferred about individual training records from the model. Our techniques use a Rényi differential privacy (RDP) machine learning scheme based on a variant of stochastic gradient descent. The performance of our proposed framework is evaluated using three public datasets: UNSW-NB15, NSL-KDD, and ToN-IoT. Our findings show that differentially private models can provide privacy protection against attackers, but at the cost of a substantial loss in model utility. We therefore propose an empirical value of ϵ that optimally balances utility and privacy for the smart home datasets considered.
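
For readers unfamiliar with the privacy notion named above, Rényi differential privacy can be stated formally: a randomized mechanism M satisfies (α, ε)-RDP if, for every pair of adjacent datasets D and D' differing in a single record,

```latex
D_\alpha\bigl(M(D) \,\|\, M(D')\bigr)
  = \frac{1}{\alpha - 1}
    \log \mathbb{E}_{x \sim M(D')}
    \left[ \left( \frac{\Pr[M(D) = x]}{\Pr[M(D') = x]} \right)^{\alpha} \right]
  \le \varepsilon .
```

An (α, ε)-RDP guarantee converts to standard (ε', δ)-differential privacy with ε' = ε + log(1/δ)/(α − 1), which is how RDP guarantees are typically reported as a single privacy budget ϵ.

The abstract also refers to a variant of stochastic gradient descent. The sketch below illustrates the core DP-SGD mechanism that such schemes build on, not the paper's implementation: per-example gradients are clipped to an L2 bound, Gaussian noise calibrated to that bound is added, and a simple RDP accountant composes the per-step guarantees. The logistic-regression model, function names, hyperparameters, and the full-batch accountant (which ignores privacy amplification by subsampling) are all illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_C=1.0, sigma=1.0, rng=None):
    """One DP-SGD step for logistic regression: clip each per-example
    gradient to L2 norm clip_C, then add Gaussian noise to the sum."""
    rng = rng or np.random.default_rng()
    preds = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
    per_ex_grads = (preds - y)[:, None] * X     # per-example gradients
    norms = np.linalg.norm(per_ex_grads, axis=1, keepdims=True)
    clipped = per_ex_grads / np.maximum(1.0, norms / clip_C)  # sensitivity <= clip_C
    noise = rng.normal(0.0, sigma * clip_C, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_grad

def rdp_epsilon(steps, sigma, delta=1e-5, alphas=np.arange(2, 64)):
    """RDP accountant for the full-batch Gaussian mechanism: each step is
    (alpha, alpha / (2 sigma^2))-RDP, RDP composes additively over steps,
    and the total is converted to (eps, delta)-DP, minimized over alpha."""
    rdp = steps * alphas / (2.0 * sigma ** 2)
    return float(np.min(rdp + np.log(1.0 / delta) / (alphas - 1)))

# Hypothetical usage on synthetic data: train for 100 steps, report the budget.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = (rng.random(256) < 0.5).astype(float)
w = np.zeros(10)
for _ in range(100):
    w = dp_sgd_step(w, X, y, sigma=1.0, rng=rng)
print(f"epsilon after 100 steps (delta=1e-5): {rdp_epsilon(100, 1.0):.1f}")
```

Production accountants track mini-batch subsampling, which amplifies privacy and yields far smaller ϵ for the same noise level; the full-batch bound above is deliberately conservative, which is why the printed budget is large.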
