Abstract

User privacy has become a central concern in Machine Learning because privacy requirements hinder the collection of training data. Federated Learning (FL) has emerged as a response to this problem: instead of collecting users' data to train models centrally, the server sends models to clients, which train them locally and return them for aggregation and global model updating. However, FL still faces hurdles such as vulnerability to inference and poisoning attacks. This paper therefore proposes PolyFLAG_SVM: Polymorphic Federated Learning Aggregation of Gradients Support Vector Machines Framework. PolyFLAG_SVM is a novel, secure, communication-efficient framework that provides several variants of Support Vector Machine models trained with gradient descent updates. Confidence in the security of the proposed model rests on the polymorphism of the encryption keys: no key is used twice within an FL cycle, so a cracked or leaked key is useless. Moreover, the proposed framework is communication-efficient because the messages exchanged between server and clients are small. The proposed model is explained in detail and evaluated in this paper.
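The FL cycle described above (local training on clients, aggregation on the server) can be sketched for a linear SVM trained by subgradient descent. This is a minimal illustrative sketch, not the paper's actual protocol: it omits the polymorphic encryption layer entirely, uses a plain FedAvg-style average for aggregation, and all function names (`client_update`, `server_aggregate`, etc.) are hypothetical.

```python
import numpy as np

def svm_subgradient(w, X, y, lam=0.01):
    """Subgradient of the L2-regularized hinge loss at w."""
    margins = y * (X @ w)
    viol = margins < 1                      # margin-violating samples
    grad = lam * w
    if viol.any():
        grad -= (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    return grad

def client_update(w_global, X, y, lr=0.1, epochs=5):
    """One client's local training pass, starting from the global model."""
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * svm_subgradient(w, X, y)
    return w

def server_aggregate(client_weights):
    """Server averages the locally trained models (FedAvg-style sketch)."""
    return np.mean(client_weights, axis=0)

# Demo: three clients, each holding a private, linearly separable shard.
rng = np.random.default_rng(0)
shards = []
for _ in range(3):
    y = rng.choice([-1.0, 1.0], size=60)
    X = rng.normal(size=(60, 2)) + y[:, None] * 2.0
    shards.append((X, y))

w_global = np.zeros(2)
for _ in range(10):                         # FL communication rounds
    updates = [client_update(w_global, X, y) for X, y in shards]
    w_global = server_aggregate(updates)

X_all = np.vstack([X for X, _ in shards])
y_all = np.concatenate([y for _, y in shards])
accuracy = float(np.mean(np.sign(X_all @ w_global) == y_all))
```

In the paper's setting, only small messages (model weights or gradients, here a 2-vector per client per round) cross the network, which is the source of the communication efficiency claimed in the abstract.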
