Federated learning is a distributed machine learning paradigm in which clients train models on local data, so that raw data is never transmitted to a central server, offering distinct advantages for privacy protection. However, in real-world scenarios, data across clients is often non-Independently and Identically Distributed (non-IID) and imbalanced, which causes discrepancies among local models and degrades the quality of global model aggregation. To tackle this issue, this paper proposes a novel framework, FedARF, which improves federated learning performance by adaptively reconstructing local features during training. FedARF provides a simple reconstruction module that aligns feature representations across clients, thereby enhancing the generalization capability of the cross-client aggregated model. In addition, to better adapt the model to each client's data distribution, FedARF employs an adaptive feature fusion strategy that blends global and local model information more effectively, improving the model's accuracy and generalization. Experimental results demonstrate that the proposed method significantly outperforms existing federated learning methods on a variety of image classification tasks, achieving faster convergence and superior performance under non-IID data distributions.
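To make the fusion idea concrete, the following is a minimal sketch, not the paper's actual FedARF implementation: it assumes models are represented as parameter dictionaries and blends global and local parameters with a hypothetical mixing weight `alpha` (the paper's adaptive strategy for choosing this weight is not detailed in the abstract).

```python
def adaptive_fuse(global_params, local_params, alpha=0.5):
    """Blend global and local model parameters.

    alpha in [0, 1] weights the global model; (1 - alpha) weights the
    local model. In an adaptive scheme, alpha could be set per client
    (e.g., from validation loss), but here it is a fixed illustrative value.
    """
    return {
        name: alpha * global_params[name] + (1.0 - alpha) * local_params[name]
        for name in global_params
    }

# Toy example with scalar "parameters" for readability.
global_params = {"w": 1.0, "b": 0.0}
local_params = {"w": 3.0, "b": 2.0}
fused = adaptive_fuse(global_params, local_params, alpha=0.25)
# alpha=0.25 keeps 25% of the global weights and 75% of the local weights,
# so fused["w"] = 0.25*1.0 + 0.75*3.0 = 2.5 and fused["b"] = 1.5.
```

A client would then continue local training from the fused parameters, so that the personalized model retains global knowledge while fitting its own non-IID data.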