The federated learning (FL) paradigm aims to distribute the computational burden of training among several computation units, usually called agents or workers, while keeping local training datasets private. This is typically achieved with a server-worker architecture in which agents iteratively update local models and communicate local parameters to a server, which aggregates them and returns the result to the agents. However, the presence of adversarial agents, which may intentionally exchange malicious parameters or may hold corrupted local datasets, can jeopardize the FL process. We therefore propose selective trimmed average (SETA), a resilient algorithm that mitigates the effect of misbehaving agents on the global model. SETA is based on properly filtering and combining the exchanged parameters. We mathematically prove that the proposed algorithm is resilient against data poisoning and local model poisoning attacks. Most resilient methods in the literature assume that a trusted server is available. In contrast, our algorithm works in both server-worker and shared-memory architectures, the latter removing the need for a trusted server. The theoretical findings are corroborated through numerical results on the MNIST dataset and the multiclass weather dataset (MWD).
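The abstract does not spell out SETA's filtering rule. As a purely illustrative sketch of the general family it belongs to, the snippet below implements a coordinate-wise trimmed-mean aggregator, a standard building block for resilient FL; the function name, parameter names, and trimming level are assumptions, not the authors' specification.

```python
import numpy as np

def trimmed_mean_aggregate(local_params, num_malicious):
    """Illustrative coordinate-wise trimmed mean (not the exact SETA rule).

    local_params: array of shape (num_workers, num_parameters) holding one
    flattened parameter vector per worker.
    num_malicious: assumed upper bound on the number of adversarial workers.
    """
    # Sort each coordinate independently across workers.
    sorted_params = np.sort(local_params, axis=0)
    # Discard the num_malicious smallest and largest values per coordinate.
    kept = sorted_params[num_malicious: local_params.shape[0] - num_malicious]
    # Average the surviving values to form the aggregated parameter vector.
    return kept.mean(axis=0)

# Toy usage: 5 honest workers near zero, 2 poisoned updates at 100.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(5, 3))
malicious = np.full((2, 3), 100.0)
updates = np.vstack([honest, malicious])
print(trimmed_mean_aggregate(updates, num_malicious=2))  # stays close to zero
```

In this toy run the poisoned updates are trimmed away coordinate-wise, so the aggregate remains close to the honest mean, which is the qualitative behavior a resilient aggregation rule such as SETA is designed to guarantee.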