The personalized federated learning (PFL) framework, which optimizes an individual model for each client, offers better privacy and flexibility. However, in challenging intelligent sensing applications, heterogeneous client data distributions make server-side aggregation of local models unstable or even prevent convergence. To address the resulting performance degradation, existing PFL methods focus on fine-tuning the global model but ignore the impact of the global model fusion algorithm on the results. In this paper, we propose p-FedADF, an explainable neural-aware decoupling fusion PFL framework, to address these challenges. It contains two carefully designed modules. The local decoupling module, deployed on each client, uses an architecture-disentanglement technique to decouple the feature extractor of the client's local model into sub-networks according to data categories, learning through training how features are extracted for each category of data. The global aggregation module, deployed on the server, aligns the sub-network positions across multiple clients and performs a fine-grained aggregation of the generic feature extractor. In addition, we provide a mask encoding scheme that reduces the communication overhead of transmitting the sub-network sets between the server and clients. Our p-FedADF achieves improvements of 1.6%, 0.2%, 2.3%, and 4.5% on a real-world dataset and three benchmark datasets, respectively, compared with state-of-the-art (SOTA) methods.
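The mask-aligned, fine-grained aggregation described above can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's implementation: sub-networks are represented as binary masks over a flattened parameter vector, and each parameter is averaged only over the clients whose sub-network covers it.

```python
import numpy as np

def aggregate_subnetworks(client_weights, client_masks):
    """Illustrative sketch: average each parameter only over the clients
    whose sub-network (indicated by a binary mask) covers that parameter."""
    stacked_w = np.stack(client_weights)                  # (n_clients, n_params)
    stacked_m = np.stack(client_masks).astype(float)      # binary coverage masks
    counts = stacked_m.sum(axis=0)                        # clients covering each weight
    summed = (stacked_w * stacked_m).sum(axis=0)
    # leave a zero where no client's sub-network covers the parameter
    return np.divide(summed, counts,
                     out=np.zeros_like(summed), where=counts > 0)

# toy example: two clients, four parameters
w1, m1 = np.array([1.0, 2.0, 0.0, 4.0]), np.array([1, 1, 0, 1])
w2, m2 = np.array([3.0, 0.0, 6.0, 8.0]), np.array([1, 0, 1, 1])
print(aggregate_subnetworks([w1, w2], [m1, m2]))  # → [2. 2. 6. 6.]
```

Transmitting such binary masks instead of full index sets is one way a mask encoding scheme can cut communication cost, since each mask entry needs only a single bit.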