Abstract

The growing number of Internet-of-Things (IoT) devices has produced large volumes of data. Deep learning techniques are widely used to extract the potential value of these data due to their unprecedented performance in both academic and industrial communities. However, the data generated by IoT devices are distributed among different users. Directly collecting these data on a central server would cause privacy leakage, especially for personal sensitive data. Rather than centralized training with access to all the raw data, an alternative is to collaboratively learn a model in a distributed manner. However, there are two main challenges in a distributed learning setting. The first is how to preserve the privacy of users. The second is how to reduce the communication burden (e.g., mobile users have limited bandwidth) caused by frequent data exchange. To address these two challenges, we design a communication-efficient and privacy-preserving framework that enables different participants to learn a model in a distributed manner with a privacy protection guarantee. In particular, we develop a differentially private approximation mechanism for distributed deep learning. In addition, we design a new gradient sparsification method that, for the first time, reduces both upload and download communication costs. The performance of the proposed framework is tested under different neural network structures on different datasets, including image classification and mobile sensor data. The experimental results demonstrate that we can reduce communication to as little as 2% of full gradient exchange and achieve an accuracy improvement of up to 16% over previous works.
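To make the two ideas the abstract combines concrete, below is a minimal sketch of a client-side update step: top-k gradient sparsification (upload only the largest-magnitude coordinates) followed by clipping and Gaussian noise as a standard differential-privacy mechanism. This is not the paper's implementation; the function names and the parameters `k_ratio`, `clip_norm`, and `sigma` are illustrative assumptions.

```python
# Illustrative sketch only, not the authors' method. Combines top-k
# sparsification (communication reduction) with the Gaussian mechanism
# (differential privacy); all parameter values here are assumed.
import numpy as np

def sparsify_top_k(grad, k_ratio=0.02):
    """Keep the k largest-magnitude entries; transmit (indices, values)."""
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k entries
    return idx, flat[idx]

def privatize(values, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip to bound sensitivity, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(values)
    clipped = values * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=values.shape)

# A client uploads only the noised top-k values plus their indices;
# the server scatters them back into a dense update.
grad = np.random.randn(10_000)                  # stand-in for a local gradient
idx, vals = sparsify_top_k(grad, k_ratio=0.02)  # ~2% of coordinates uploaded
dense_update = np.zeros_like(grad)
dense_update[idx] = privatize(vals)             # server-side reconstruction
```

In this sketch, only ~2% of gradient coordinates are transmitted per round, consistent with the communication reduction the abstract reports; how the framework additionally compresses the download direction is detailed in the paper itself.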
