Abstract

In recent years, federated learning (FL), a collaborative, decentralized, privacy-preserving machine learning framework, was proposed to surmount the challenges of data sensitivity and data silos, and it has rapidly become a research hotspot. In this paper, we present an overview of the FL workflow, as well as the communication-overhead and privacy-leakage problems associated with FL. To alleviate these two pain points, several corresponding solutions are presented along with their advantages and disadvantages. To reduce communication overhead, this paper reviews related research and suggests three reliable approaches: increasing the amount of computation on the edge side, compressing the model, and improving parallelism. To address privacy concerns, this paper introduces three prevailing privacy-preservation techniques: differential privacy, homomorphic encryption, and secure multiparty computation (SMC), and, by summarizing the relevant literature, shows how these techniques can be efficiently incorporated into a federated learning framework. On the whole, this paper explains the overall workflow of an FL framework and advances the understanding of how FL can improve communication efficiency and enhance privacy.
