Abstract

U-shaped networks have become prevalent in various medical image tasks such as segmentation and restoration. However, most existing U-shaped networks rely on centralized learning, which raises privacy concerns. To address this issue, federated learning (FL) and split learning (SL) have been proposed. However, balancing local computational cost, model privacy, and parallel training remains a challenge. In this article, we propose a novel hybrid learning paradigm called Dynamic Corrected Split Federated Learning (DC-SFL) for U-shaped medical image networks. To preserve data privacy, including the input, model parameters, labels, and output simultaneously, we propose to split the network into three parts hosted by different parties. We propose a Dynamic Weight Correction Strategy (DWCS) to stabilize the training process and avoid the model drift problem caused by data heterogeneity. To further enhance privacy protection and establish a trustworthy distributed learning paradigm, we introduce additively homomorphic encryption into the aggregation of the client-side models, which helps prevent potential collusion between parties and provides a stronger privacy guarantee for our proposed method. The proposed DC-SFL is evaluated on various medical image tasks, and the experimental results demonstrate its effectiveness. In comparison with state-of-the-art distributed learning methods, our method achieves competitive performance.
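
To make the three-way split concrete, the following is a minimal sketch, not the authors' DC-SFL implementation: it assumes a toy PyTorch U-shaped network without skip connections, where the client keeps the input-side head and the output-side tail (so raw images, labels, and predictions never leave the client) while the computation-heavy middle runs on the server. The module names (ClientHead, ServerBody, ClientTail) and all layer sizes are hypothetical, and the sketch omits DWCS, the federated aggregation, and the homomorphic encryption step.

    # Hypothetical three-way split of a simplified U-shaped network.
    import torch
    import torch.nn as nn

    class ClientHead(nn.Module):      # holds raw images; first encoder stage
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        def forward(self, x):
            return self.enc(x)        # smashed features are sent to the server

    class ServerBody(nn.Module):      # heavy middle stages; sees no data or labels
        def __init__(self):
            super().__init__()
            self.mid = nn.Sequential(
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            )
        def forward(self, f):
            return self.mid(f)

    class ClientTail(nn.Module):      # holds labels; produces the final output
        def __init__(self):
            super().__init__()
            self.dec = nn.Conv2d(16, 2, 1)   # e.g. 2-class segmentation logits
        def forward(self, f):
            return self.dec(f)

    # One illustrative training step: activations flow client -> server -> client,
    # and gradients flow back along the same path during backward().
    head, body, tail = ClientHead(), ServerBody(), ClientTail()
    opt = torch.optim.SGD(
        list(head.parameters()) + list(body.parameters()) + list(tail.parameters()),
        lr=1e-2,
    )
    images = torch.randn(4, 1, 64, 64)          # stays on the client
    labels = torch.randint(0, 2, (4, 64, 64))   # stays on the client
    logits = tail(body(head(images)))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

In the actual distributed setting, the head and tail parameters of each client would be aggregated across clients (under additive homomorphic encryption, per the abstract), while only intermediate features and their gradients cross the client-server boundary.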
