Abstract

Federated learning, a typical distributed learning paradigm, shows great potential in the Industrial Internet of Things, smart homes, smart cities, and similar settings. It enables collaborative learning without data ever leaving local users. Despite these benefits, it still faces the risk of privacy breaches and a single point of failure at the aggregation server: adversaries can use intermediate models to infer private user information, or even return an incorrect global model by manipulating the aggregation server. To address these issues, several federated learning solutions focusing on privacy preservation and security have been proposed. However, these solutions still face challenges in resource-limited scenarios. In this paper, we propose G-VCFL, a grouped verifiable chained privacy-preserving federated learning scheme. Specifically, we first use a grouped chained learning mechanism to guarantee the privacy of users, and then propose a verifiable secure aggregation protocol to guarantee the verifiability of the global model. G-VCFL requires no complex cryptographic primitives and introduces no noise; instead, it achieves verifiable privacy-preserving federated learning using lightweight pseudorandom generators. We conduct extensive experiments on real-world datasets, comparing G-VCFL with other state-of-the-art approaches. The experimental results and functional evaluation indicate that G-VCFL is efficient in all six experimental cases and satisfies all the intended design goals.
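
To give a rough intuition for how chained, PRG-based aggregation can hide individual updates, the sketch below shows a minimal toy version in Python. It is not the paper's protocol: the group structure, the seed handling (`group_seed`), and the helper names (`prg_mask`, `chain_aggregate`, `server_unmask`) are illustrative assumptions, and the verifiability mechanism is omitted entirely.

```python
# Conceptual sketch only: within one group, users pass a running sum along a
# chain; a pseudorandom mask seeded by a group-agreed value hides the first
# user's update from the next user, and the server removes the mask to obtain
# the group aggregate without seeing any individual update.
import numpy as np

def prg_mask(seed: int, dim: int) -> np.ndarray:
    """Expand a shared seed into a pseudorandom mask (stand-in for a real PRG)."""
    return np.random.default_rng(seed).standard_normal(dim)

def chain_aggregate(local_updates, group_seed: int) -> np.ndarray:
    """Chain the users of one group: each adds its update to the masked running sum."""
    dim = local_updates[0].shape[0]
    running = prg_mask(group_seed, dim)   # initial mask known within the group
    for update in local_updates:          # each user only sees a masked partial sum
        running = running + update
    return running                        # the group leader forwards this to the server

def server_unmask(group_sums, group_seeds, dim: int) -> np.ndarray:
    """Remove the group masks and sum the group aggregates into the global update."""
    total = np.zeros(dim)
    for masked_sum, seed in zip(group_sums, group_seeds):
        total += masked_sum - prg_mask(seed, dim)
    return total

# Toy usage: two groups of three users, 4-dimensional model updates.
rng = np.random.default_rng(0)
groups = [[rng.standard_normal(4) for _ in range(3)] for _ in range(2)]
seeds = [11, 42]
masked = [chain_aggregate(g, s) for g, s in zip(groups, seeds)]
global_sum = server_unmask(masked, seeds, dim=4)
assert np.allclose(global_sum, sum(u for g in groups for u in g))
```

The point of the sketch is only that a lightweight PRG mask, applied once per group chain, lets the server recover group-level sums while no party along the chain ever observes another user's raw update; how G-VCFL actually arranges groups, distributes seeds, and verifies the global model is described in the paper itself.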
