Abstract

Federated learning has attracted considerable attention in artificial intelligence in recent years owing to its distributed nature and emphasis on privacy protection. To better align with real-world scenarios, federated class incremental learning (FCIL) has emerged as a new research direction, but it faces challenges such as heterogeneous data, catastrophic forgetting, and inter-client interference. However, most existing methods enhance model performance at the expense of privacy, for example by uploading prototypes or samples, which violates the basic principle of federated learning that only models are transmitted. This paper presents a novel selective knowledge fusion (FedSKF) model that addresses data heterogeneity and inter-client interference without sacrificing any privacy. Specifically, we introduce a projection in turn (PIT) module on the server side that indirectly recovers client data distribution information through optimal transport. Then, to reduce inter-client interference, each client selectively absorbs knowledge from the global model via knowledge distillation and an incompletely synchronized classifier, which together form the selective knowledge synchronization (SKS) module. Furthermore, to mitigate global catastrophic forgetting, we propose a global forgetting loss that distills knowledge from the old global model. Our framework can easily integrate various class incremental learning (CIL) methods, allowing it to adapt to application scenarios with varying privacy requirements. Extensive experiments on the CIFAR100 and Tiny-ImageNet datasets show that our method outperforms existing works.

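The abstract does not specify the optimal transport formulation used by the PIT module; a common choice is entropy-regularized optimal transport solved by Sinkhorn iterations. The sketch below shows how a server might compute a transport plan between two discrete distributions under that assumption; the cost matrix, marginals, and regularization strength `eps` are illustrative, not the paper's actual formulation.

```python
import torch

def sinkhorn(cost: torch.Tensor, a: torch.Tensor, b: torch.Tensor,
             eps: float = 0.05, n_iters: int = 100) -> torch.Tensor:
    """Entropy-regularized optimal transport plan between marginals a and b.

    Assumed shapes: cost is (n, m), a is (n,), b is (m,). This is a generic
    Sinkhorn solver, not the paper's PIT module itself.
    """
    K = torch.exp(-cost / eps)                  # Gibbs kernel from the cost matrix
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u)                     # alternating Sinkhorn scaling updates
        u = a / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)  # plan = diag(u) K diag(v)
```

For the SKS module, one plausible reading of an "incompletely synchronized classifier" is that a client copies only the global classifier rows corresponding to classes it has actually seen, leaving the rest of its head untouched. The helper below is a minimal sketch of that idea; the function name and the per-row copy scheme are assumptions.

```python
import torch
import torch.nn as nn

def selective_classifier_sync(local_head: nn.Linear,
                              global_head: nn.Linear,
                              seen_classes: list[int]) -> None:
    """Overwrite only the rows of the local classifier for locally seen classes.

    Rows for classes this client has never observed keep their local weights,
    which is one way to limit inter-client interference (an assumed scheme).
    """
    with torch.no_grad():
        idx = torch.tensor(seen_classes, dtype=torch.long)
        local_head.weight[idx] = global_head.weight[idx]
        local_head.bias[idx] = global_head.bias[idx]
```

Finally, a global forgetting loss that "distills knowledge from the old global model" is typically a temperature-softened KL divergence between the current model's logits and those of the frozen old global model. The exact form, temperature, and any class masking in FedSKF are not given in the abstract, so this is a generic distillation sketch:

```python
import torch
import torch.nn.functional as F

def global_forgetting_loss(student_logits: torch.Tensor,
                           old_global_logits: torch.Tensor,
                           temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened current and old-global predictions."""
    log_p = F.log_softmax(student_logits / temperature, dim=1)
    q = F.softmax(old_global_logits / temperature, dim=1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p, q, reduction="batchmean") * temperature ** 2
```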