Abstract

Wireless sensor networks are widely used to achieve fine-grained information collection. However, the large-scale data acquisition and processing performed by sensors raise privacy concerns. Federated learning is a promising, privacy-friendly framework that trains a model across multiple devices or edge nodes holding local data samples, without transferring that data to a server. Keeping data local is not by itself sufficient to protect privacy, so differential privacy is often applied within federated learning. However, different users have different privacy requirements, so it is inappropriate to apply a single protection scheme that assumes all users either trust or distrust the server: assuming universal distrust (local differential privacy) sacrifices accuracy, while assuming universal trust (central differential privacy) offers weaker privacy. This paper proposes a secure and reliable federated learning algorithm that integrates hybrid differential privacy into federated learning, dividing users into two categories according to their privacy needs. In addition, we analyze the convergence and privacy bounds of the proposed algorithm, and we propose an adaptive gradient clipping scheme and an improved composition method to reduce the effects of noise and clipping, respectively. The effectiveness of the algorithm is verified through theoretical analysis and experimental evaluation on real-world datasets.
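The full text is not reproduced here, but the hybrid trust model the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function names, the per-round aggregation, and the noise parameters (`clip_norm`, `sigma_local`) are all assumptions. The idea shown is standard DP-SGD-style clipping, with local-DP users perturbing their gradients on-device and central-DP users relying on noise added once at the server.

```python
import numpy as np

def clip_and_noise(grad, clip_norm, noise_multiplier, rng):
    """Clip a gradient to a maximum L2 norm, then add Gaussian noise
    calibrated to that norm (standard Gaussian-mechanism DP step)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

def hybrid_dp_round(user_grads, trusts_server, clip_norm, sigma, rng):
    """One aggregation round under a hybrid trust model (illustrative):
    users who distrust the server add local-DP noise before uploading,
    while trusting users upload only clipped gradients and rely on
    noise added once, centrally, by the server."""
    uploads = []
    for g, trusting in zip(user_grads, trusts_server):
        if trusting:
            # Central-DP user: clip only; the server noises the aggregate.
            norm = np.linalg.norm(g)
            uploads.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
        else:
            # Local-DP user: clip and perturb before the gradient
            # ever leaves the device.
            uploads.append(clip_and_noise(g, clip_norm, sigma, rng))
    agg = np.mean(uploads, axis=0)
    # Server-side noise covers the trusting users' contribution;
    # its scale shrinks with the number of participants.
    agg += rng.normal(0.0, sigma * clip_norm / len(uploads), size=agg.shape)
    return agg
```

In this sketch the clipping threshold is fixed; the abstract's adaptive clipping scheme would instead adjust `clip_norm` across rounds (for example, from observed gradient norms) to reduce the bias that clipping introduces.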
