Abstract

In parallel with the rapid adoption of deep learning for multimedia data analysis, there has been growing awareness of and concern about data security and privacy. Recent advances in federated learning enable many network clients to collaboratively train a model under the orchestration of a central server while preserving clients' privacy. However, the standard assumption of independent and identical distribution (IID) may be violated in federated learning because data label preferences can vary across clients. Recent efforts address this issue either by adapting a strong global model to each client or by jointly training local models for groups of similar clients. Both strategies, however, degrade in highly non-IID scenarios. This work introduces a novel method, deep cooperative learning (DCL), to address this problem. DCL leverages the reciprocal structure between the deep learning tasks of different clients to obtain effective feedback signals that enhance the training of personalized local models. To the best of our knowledge, this is the first time the non-IID problem has been addressed under the principle of task interactions. We demonstrate the effectiveness of DCL on two medical multimedia data analysis tasks. The results show that our method achieves a significant performance improvement over the standard federated learning method. In conclusion, this work develops a method for addressing non-IID problems in deep-learning-based privacy-preserving learning, allowing highly non-IID data to be used to improve local model performance.
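The abstract does not describe the mechanics of DCL's feedback signals. As a rough, hypothetical illustration of how personalized client models in a federated, non-IID setting could exchange feedback without sharing raw data, the sketch below uses mutual distillation over a shared unlabeled reference set. The toy data, model sizes, loss weighting, and the distillation formulation are assumptions made for illustration only; they are not the authors' algorithm.

```python
# Hypothetical sketch (not the paper's DCL algorithm): each client trains a
# personalized model on its own skewed label distribution, while peers provide
# soft predictions on a shared unlabeled reference set as a "feedback signal".
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
NUM_CLIENTS, NUM_CLASSES, DIM = 3, 4, 16

def make_client_data(preferred_labels, n=128):
    # Non-IID toy data: each client only observes a subset of the label space.
    y = torch.tensor(preferred_labels)[torch.randint(len(preferred_labels), (n,))]
    x = torch.randn(n, DIM) + y.float().unsqueeze(1)  # class-dependent shift
    return x, y

clients = [make_client_data([0, 1]), make_client_data([1, 2]), make_client_data([2, 3])]
models = [nn.Linear(DIM, NUM_CLASSES) for _ in range(NUM_CLIENTS)]
opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in models]

# Shared, unlabeled reference data available to every client (an assumption of
# this sketch); in a real system only predictions on it would be communicated.
public_x = torch.randn(64, DIM)

for rnd in range(20):  # communication rounds
    # Each client publishes soft predictions on the shared reference data.
    with torch.no_grad():
        soft = [F.softmax(m(public_x), dim=1) for m in models]

    for i, (x, y) in enumerate(clients):
        peer_soft = torch.stack([soft[j] for j in range(NUM_CLIENTS) if j != i]).mean(0)

        loss_sup = F.cross_entropy(models[i](x), y)           # personalized task loss
        loss_coop = F.kl_div(                                  # cooperative feedback term
            F.log_softmax(models[i](public_x), dim=1),
            peer_soft, reduction="batchmean")
        loss = loss_sup + 0.5 * loss_coop

        opts[i].zero_grad()
        loss.backward()
        opts[i].step()
```

Under these assumptions, each client keeps its labeled data local and only model outputs on the shared reference set cross the network, which preserves the privacy-oriented spirit of federated learning while still letting non-IID clients benefit from one another.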
