Abstract
Federated Learning (FL) has emerged as a promising approach to collaborative machine learning that does not require sharing raw data. It enables decentralized model updates while preserving the privacy of each device and reducing communication overhead. This experiment evaluates the effectiveness of four personalized FL algorithms, namely FedAvg, APPLE, FedBABU and FedProto, in a decentralized setting, with a particular focus on the Fashion-MNIST dataset under a non-IID data distribution. The objective is to identify which algorithm performs best on image classification tasks. The experimental results show that FedProto and APPLE achieve nearly equivalent performance and outperform FedBABU and FedAvg. Interestingly, increasing the number of uploads in FedBABU brings its results close to those of APPLE and FedProto, whereas under limited upload conditions FedBABU performs similarly to FedAvg. These results provide valuable insights into the differential performance of personalized FL algorithms in non-IID data scenarios and offer guidance for their application in distributed environments, especially in sensitive domains such as medical, military and confidential image analysis, where privacy and communication efficiency are paramount concerns.
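The FedAvg baseline named above aggregates client updates by sample-weighted averaging of model parameters on the server. A minimal sketch of that aggregation step (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Sample-weighted average of client parameters (FedAvg server step).

    client_weights: one list of np.ndarray layers per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Each client's layer contributes proportionally to its data size.
        layer_avg = sum(
            w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Toy example: two clients, one 2-parameter "layer" each.
clients = [[np.array([1.0, 3.0])], [np.array([3.0, 5.0])]]
sizes = [1, 3]  # second client holds 3x the data, so it dominates
print(fedavg_aggregate(clients, sizes)[0])  # → [2.5 4.5]
```

Personalized variants such as FedBABU and FedProto depart from this by, respectively, keeping part of the model local and exchanging class prototypes instead of full weights, which is why their communication behavior differs under limited uploads.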