Abstract

Semantic segmentation of medical images with deep learning is a popular topic in smart healthcare. Training an effective deep learning model requires a large number of annotated medical images. However, most medical images are scattered across hospitals and research institutions, and professionals such as doctors rarely have enough time to label them. Moreover, privacy protection regulations such as GDPR prohibit sharing data directly between institutions. To overcome these obstacles, we propose SU-Net, an efficient federated learning model for brain tumor segmentation. We introduce inception modules and dense blocks into the standard U-Net to build SU-Net with multi-scale receptive fields and feature reuse. We conduct experiments on the LGG (Low-Grade Glioma) segmentation dataset “Brain MRI Segmentation” from Kaggle. The results show that, in the non-federated scenario, SU-Net achieves an AUC (Area Under the Curve, measuring classification accuracy) of \(99.7\%\) and a DSC (Dice Similarity Coefficient, measuring segmentation accuracy) of \(78.5\%\), remarkably higher than the state-of-the-art semantic segmentation model DeepLabv3+ and the classical U-Net dedicated to medical image segmentation. In the federated scenario, SU-Net still outperforms the baselines.
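The abstract does not spell out the aggregation rule used in the federated scenario; a common choice is federated averaging (FedAvg), where each institution trains locally and a server averages the resulting model parameters weighted by local dataset size. The sketch below is an illustrative assumption, not the paper's stated method; the `fed_avg` function and hospital names are hypothetical.

```python
# Minimal sketch of federated averaging (FedAvg), a standard aggregation
# scheme in federated learning. This is an illustrative assumption; the
# paper does not specify its exact aggregation rule.

def fed_avg(client_weights, client_sizes):
    """Average client model parameters, weighted by local dataset size.

    client_weights: list of parameter vectors (lists of floats), one per client.
    client_sizes: number of local training samples at each client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Hypothetical example: two hospitals holding different amounts of labeled MRI data.
hospital_a = [1.0, 2.0]   # parameters after local training at site A
hospital_b = [3.0, 4.0]   # parameters after local training at site B
global_model = fed_avg([hospital_a, hospital_b], [100, 300])
# → [2.5, 3.5]  (site B dominates with 3/4 of the data)
```

Because only model parameters leave each site, raw patient images never need to be shared, which is what makes this setup compatible with regulations like GDPR.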
