Federated Learning (FL) is a framework for distributed, privacy-preserving learning: a central server coordinates multiple clients to collaboratively train a global model without sharing their private data. Recent studies have shown that this parameter-only communication strategy suffers under non-independent and identically distributed (non-iid) client data, i.e., statistical heterogeneity. This motivates Personalized Federated Learning (PFL), which aims to improve each client's individual performance while still learning a global model. Most existing PFL work addresses label shift in supervised tasks; far less attention has been paid to feature shift, another common problem in real-world federated deployments. For instance, client data may differ in illumination, viewing angle, and quality for natural images, or in imaging protocols for medical images. To address this issue, we propose a general yet powerful framework that leverages the geometric interpretation of deep neural networks: geometrically, a deep neural network partitions the feature space into a Power Diagram (PD) or Voronoi Diagram (VD). We introduce regularized losses that mitigate feature shift from the PD and VD perspectives. In addition, each client is equipped with a personalized generator network that rectifies its feature distribution, adapting inputs from the client's own domain to an "averaged" domain that generalizes well across all clients. Our method achieves competitive performance against current state-of-the-art methods.
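The geometric claim above can be made concrete with a toy sketch (not the paper's method; all names and shapes here are illustrative): a one-hidden-layer ReLU network cuts its input space with one hyperplane per hidden unit, and points sharing the same activation pattern lie in the same convex cell of the resulting partition.

```python
import numpy as np

# Illustrative only: a random one-hidden-layer ReLU network on 2-D inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2))   # 8 hidden units -> 8 hyperplanes
b = rng.normal(size=8)

def cell_id(x):
    """Activation pattern of x, used as a hashable id of its convex cell."""
    return tuple((W @ x + b > 0).astype(int))

# Sample a grid over [-2, 2]^2 and count the distinct cells it touches.
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 40),
                            np.linspace(-2, 2, 40)), axis=-1).reshape(-1, 2)
cells = {cell_id(x) for x in grid}
print(f"distinct ReLU cells touched by the grid: {len(cells)}")
```

Each cell corresponds to one linear piece of the network; the paper's PD/VD view assigns such cells the structure of a power (weighted Voronoi) diagram, which its regularizers exploit.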