Abstract

Federated learning (FL) is a distributed framework for training machine learning (ML) models. Training agents upload local model parameters rather than raw training data, and a central server performs parameter aggregation. FL thus protects user data privacy and breaks down information silos during model training. Federated Averaging (FedAvg) is the aggregation method most commonly used in FL training tasks: the central server computes the mean of the local model parameters to obtain the new global parameters. FedAvg assumes that all training agents are honest, so the central server has no insight into the trustworthiness of the terminal agents. When attackers are present among the training agents, the global model's performance can be severely degraded and the training task may fail to complete normally. To address this problem, we propose FedIM, a Federated Learning method whose aggregation is based on the Importance of each training agent. Before aggregation, the central server pre-evaluates the agents' parameters, computes a weight for each set of parameters from the historical behavior records of the training terminals, and then performs federated aggregation, improving the poisoning resistance of the learning task. Experiments show that, when malicious agents are involved, our method effectively improves the global model's resistance to poisoning and accelerates training compared with FedAvg.
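The abstract does not specify FedIM's exact weighting rule, but the contrast between FedAvg's plain mean and an importance-weighted aggregation can be illustrated with a minimal sketch. The function names, the toy updates, and the importance scores below are illustrative assumptions; in FedIM the weights come from the server's pre-evaluation of each agent's historical behavior, not the fixed values used here.

```python
import numpy as np

def fedavg_aggregate(local_params):
    """FedAvg: the global parameters are the plain mean of all local
    parameter vectors (implicitly assumes every agent is honest)."""
    return np.mean(local_params, axis=0)

def importance_weighted_aggregate(local_params, importance):
    """Importance-weighted aggregation in the spirit of FedIM: each
    agent's update is scaled by a trust weight before averaging.
    The weighting scheme here is a hypothetical stand-in, not the
    paper's exact rule."""
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return np.average(local_params, axis=0, weights=w)

# Toy example: two honest agents and one poisoning attacker.
honest_a = np.array([1.0, 1.0])
honest_b = np.array([1.1, 0.9])
poisoned = np.array([10.0, -10.0])  # malicious, heavily scaled update
updates = np.stack([honest_a, honest_b, poisoned])

print(fedavg_aggregate(updates))  # mean is pulled toward the attacker
# Down-weighting the attacker keeps the aggregate near the honest updates.
print(importance_weighted_aggregate(updates, [1.0, 1.0, 0.05]))
```

The toy run shows the failure mode the abstract describes: under plain averaging a single malicious update shifts the global parameters far from the honest consensus, while a low importance weight for the suspect agent largely neutralizes its contribution.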
