Federated learning (FL) is an effective way to train a machine learning model in a distributed fashion, keeping data on local devices without exchanging it. However, because local data are inaccessible, handling label noise in FL is especially challenging. Most existing methods assume either open-set or closed-set noise and propose filtering or correction solutions accordingly, ignoring that both types of label noise can be mixed in real-world scenarios. In this article, we propose FedMIN, a novel FL method that discriminates between noise types and makes FL robust to mixed noise. FedMIN employs a composite framework that captures local-global differences in multiparticipant distributions to model generalized noise patterns. By determining adaptive thresholds for identifying mixed label noise in each client and assigning appropriate weights during model aggregation, FedMIN enhances the performance of the global model. Furthermore, FedMIN incorporates a loss alignment mechanism using local and global Gaussian mixture models (GMMs) to mitigate the risk of revealing sample-wise losses. Extensive experiments are conducted on several public datasets, including simulated FL testbeds, i.e., CIFAR-10, CIFAR-100, and SVHN, and real-world ones, i.e., Camelyon17 and the multiorgan nuclei challenge (MoNuSAC). Compared to FL benchmarks, FedMIN improves model accuracy by up to 9.9% owing to its superior noise estimation capabilities.
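As a rough illustration of the GMM-on-losses idea that the loss alignment mechanism builds on, the sketch below fits a two-component GMM to per-sample training losses on a client and treats the low-loss mode as likely clean. This is a minimal sketch of the general technique, not FedMIN itself: the function name, threshold, and synthetic data are hypothetical, and the paper's actual procedure additionally aligns local and global GMMs, which this sketch does not attempt.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_by_loss_gmm(losses, clean_prob_threshold=0.5):
    """Fit a two-component GMM to per-sample losses and flag likely-noisy samples.

    Samples with high posterior probability under the low-mean component are
    treated as clean; the rest are treated as potentially mislabeled.
    (Hypothetical helper for illustration; threshold is an assumption.)
    """
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    # The component with the smaller mean models the clean (low-loss) samples.
    clean_component = int(np.argmin(gmm.means_.ravel()))
    clean_prob = gmm.predict_proba(losses)[:, clean_component]
    is_clean = clean_prob > clean_prob_threshold
    return is_clean, clean_prob

# Example: synthetic per-sample losses from one client after a local epoch.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 900),   # mostly clean samples
                         rng.normal(2.0, 0.50, 100)])  # high-loss, likely noisy
is_clean, clean_prob = split_by_loss_gmm(losses)
print(f"Flagged {np.sum(~is_clean)} of {len(losses)} samples as likely noisy")
```

In a federated setting, sharing the posterior parameters of such a GMM rather than raw per-sample losses is one way to limit what the server learns about individual samples, which is the privacy concern the abstract's loss alignment mechanism addresses.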