Abstract

Federated learning is a distributed machine learning method in which clients train models on local data, so that raw data is never transmitted to a central server, providing unique advantages in privacy protection. However, in real-world scenarios, data across clients may be non-independently and identically distributed (non-IID) and imbalanced, leading to discrepancies among local models and degrading global model aggregation. To tackle this issue, this paper proposes a novel framework, FedARF, designed to improve federated learning performance by adaptively reconstructing local features during training. FedARF offers a simple reconstruction module that aligns feature representations from different clients, thereby enhancing the generalization capability of cross-client aggregated models. Additionally, to better adapt the model to each client's data distribution, FedARF employs an adaptive feature fusion strategy that blends global and local model information more effectively, improving the model's accuracy and generalization. Experimental results demonstrate that the proposed method significantly outperforms existing methods on a variety of image classification tasks, achieving faster model convergence and superior performance when dealing with non-IID data distributions.
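The adaptive feature fusion described above could, in principle, take the form of a learned gate that blends each client's local features with the global model's features. The sketch below is purely illustrative, assuming a per-dimension sigmoid gate; the function name, gating form, and parameters are assumptions, as the abstract does not specify FedARF's actual fusion rule:

```python
import numpy as np

def adaptive_feature_fusion(local_feat, global_feat, gate_params):
    """Blend local and global feature vectors with a per-dimension gate.

    Hypothetical sketch: gate_params are learnable logits; the sigmoid maps
    them into (0, 1), where values near 1 favor the local feature and values
    near 0 favor the global feature. The paper may use a different rule.
    """
    gate = 1.0 / (1.0 + np.exp(-gate_params))  # sigmoid keeps gate in (0, 1)
    return gate * local_feat + (1.0 - gate) * global_feat
```

With zero logits the gate is 0.5 and the fusion reduces to a simple average of the two feature vectors; during training, the gate could shift per dimension toward whichever source better fits the client's data distribution.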
