Abstract

The federated learning scheme enhances privacy preservation by avoiding the uploading of private data in cloud-edge computing. However, attacks against the uploaded model updates can still cause private data leakage, which demotivates privacy-sensitive edge devices from participating. To address this issue, we aim to design a privacy-preserving incentive mechanism for the federated cloud-edge learning (PFCEL) system such that 1) edge devices are motivated to actively contribute to model updating and uploading, and 2) a trade-off between private data leakage and model accuracy is achieved. We formulate the incentive design problem as a three-layer Stackelberg game, where the server-device interaction is further formulated as a contract design problem. Extensive numerical evaluations demonstrate the effectiveness of our designed mechanism in terms of privacy preservation and system utility.
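
To give a rough intuition for the leader-follower structure mentioned above, the following is a minimal sketch of backward induction in a simple two-level Stackelberg interaction between an edge server (leader, choosing a per-unit reward) and a privacy-sensitive edge device (follower, choosing its contribution level). The utility functions and parameters here are illustrative assumptions, not the paper's actual three-layer formulation or contract design.

```python
# Minimal Stackelberg backward-induction sketch (assumed utilities, not the
# paper's exact PFCEL model): the server anticipates the device's best
# response to each reward, then picks the reward maximizing its own utility.

import numpy as np

def device_utility(contribution, reward, privacy_cost=2.0):
    """Follower: payment received minus a convex privacy-leakage cost (assumed form)."""
    return reward * contribution - privacy_cost * contribution ** 2

def server_utility(contribution, reward, accuracy_gain=5.0):
    """Leader: concave model-accuracy gain minus the payment made (assumed form)."""
    return accuracy_gain * np.log(1.0 + contribution) - reward * contribution

contributions = np.linspace(0.0, 1.0, 201)   # device's feasible contribution levels
rewards = np.linspace(0.0, 5.0, 201)         # server's feasible per-unit rewards

best_u, best_reward, best_contribution = -np.inf, None, None
for r in rewards:
    # Step 1: anticipate the device's best response to reward r.
    br = contributions[np.argmax(device_utility(contributions, r))]
    # Step 2: evaluate the server's utility under that best response.
    u = server_utility(br, r)
    if u > best_u:
        best_u, best_reward, best_contribution = u, r, br

print(f"reward={best_reward:.2f}, contribution={best_contribution:.2f}, "
      f"server utility={best_u:.3f}")
```

The grid search over both strategy spaces is only for clarity; in the game-theoretic analysis such equilibria would typically be characterized in closed form.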
