Abstract

Classical federated learning methods aggregate locally computed model updates from distributed edge devices at a central server. However, these approaches still raise significant concerns regarding data privacy and require extensive communication between edge devices and the cloud, leading to high communication costs. To address these challenges, we propose MistNet, an edge-cloud privacy-preserving model training system that enables the cloud and edge devices to collaboratively perform deep neural network (DNN) training. MistNet has the following advantages. (1) Unlike previous works that require frequent communication between the edge and the cloud, MistNet requires only one-shot communication. (2) MistNet applies the rigorous local differential privacy (LDP) technique to the intermediate feature maps, guaranteeing that a user's local data and model parameters remain undisclosed. (3) In MistNet, the feature extractor is transferred from pre-trained models designed for similar application domains and remains fixed during training, eliminating the need to synchronize feature extractors across devices. Furthermore, we design the first object detection algorithm based on YOLOv5 under the MistNet framework. Finally, extensive experimental results on multiple models and datasets demonstrate that, with an appropriate partition layer and privacy budget, MistNet achieves lower communication costs, faster convergence, and acceptable model utility while greatly reducing privacy leakage from the released intermediate features. Our framework is primarily designed for image data and has been integrated into Plato; the source code is available on GitHub at https://github.com/TL-System/plato.
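The three design points above can be made concrete with a short sketch of the device-side pipeline: a frozen pre-trained feature extractor produces intermediate feature maps, LDP noise is added, and the result is uploaded once. This is a minimal illustration assuming PyTorch, a ResNet-18 backbone, a Laplace mechanism, and arbitrarily chosen partition layer, clipping bound, and privacy budget; it is not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

EPSILON = 1.0      # privacy budget for the LDP perturbation (illustrative value)
CLIP_BOUND = 1.0   # clip features so the Laplace noise scale is well-defined

# (3) Feature extractor transferred from a pre-trained model and frozen,
# so it never needs to be synchronized across devices.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = nn.Sequential(*list(backbone.children())[:5])  # assumed partition layer
extractor.eval()
for p in extractor.parameters():
    p.requires_grad = False

def perturb_ldp(features: torch.Tensor, epsilon: float) -> torch.Tensor:
    """(2) Perturb intermediate feature maps with the Laplace mechanism.

    Clipping each value to [-C, C] bounds its sensitivity at 2C, so Laplace
    noise with scale 2C / epsilon yields epsilon-LDP for each value.
    """
    clipped = features.clamp(-CLIP_BOUND, CLIP_BOUND)
    scale = 2.0 * CLIP_BOUND / epsilon
    noise = torch.distributions.Laplace(0.0, scale).sample(clipped.shape)
    return clipped + noise

# (1) One-shot communication: each local sample is mapped to a noisy feature
# map exactly once; the cloud then trains the remaining layers on these
# features without further rounds of interaction.
with torch.no_grad():
    local_images = torch.rand(8, 3, 224, 224)   # stand-in for a device's local data
    noisy_features = perturb_ldp(extractor(local_images), EPSILON)
# upload(noisy_features)  # hypothetical single upload; raw images never leave the device
```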
