Fantom: Federated Adversarial Network for Training Multi-Sequence Magnetic Resonance Imaging in Semantic Segmentation

Abstract

Ischemic stroke lesion (ISL) segmentation aids clinicians in diagnosing stroke in acute care units, but a generalized segmentation model requires data from many patients. Owing to data privacy, patient data cannot be pooled for centralized training. The Federated Learning (FL) framework overcomes this restriction, yet semantic segmentation in FL remains challenging because of the complex model, adversarial training, and non-independent and identically distributed (non-IID) datasets. In this work, we address these drawbacks for the segmentation of ISL into core and penumbra using multi-sequence magnetic resonance imaging data. Instead of the coordinate-wise weight aggregation normally followed in vanilla FL aggregation strategies, the proposed method, named FANTOM, applies the concept of neural matching, which speeds up training convergence when clients hold different non-IID data. We adversarially trained and tested our segmentation model, comprising a generator and a discriminator, on the ISLES-2015 dataset. We observe that keeping the discriminator local and aggregating only the generator not only performs well but also lowers the communication burden in the FL framework. FANTOM also outperformed centralized training, attaining average Dice and precision scores of $0.7713 \pm 0.03$ and $0.7875 \pm 0.01$, respectively.
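FANTOM itself uses neural matching rather than plain coordinate-wise averaging, but the generator-only aggregation idea (discriminators never leave the clients) can be sketched in a few lines. A minimal, hypothetical sketch, assuming each client exposes its generator weights as a dict of NumPy arrays; function and variable names are illustrative, not from the paper:

```python
import numpy as np

def aggregate_generators(client_generator_weights, client_sizes):
    """FedAvg-style weighted average of generator parameters only.

    Each client's discriminator stays local; only the generator
    weight dicts are sent to the server and averaged here, which
    also halves the upload per round compared with sending both.
    """
    total = sum(client_sizes)
    aggregated = {}
    for name in client_generator_weights[0]:
        aggregated[name] = sum(
            (n / total) * weights[name]
            for weights, n in zip(client_generator_weights, client_sizes)
        )
    return aggregated

# Two toy clients holding a single 2x2 "layer" each.
w1 = {"conv1": np.ones((2, 2))}
w2 = {"conv1": np.zeros((2, 2))}
global_gen = aggregate_generators([w1, w2], client_sizes=[3, 1])
# weighted average: 3/4 * 1 + 1/4 * 0 = 0.75 in every entry
```

Coordinate-wise averaging like this can converge slowly under non-IID data because corresponding neurons across clients need not be aligned; neural matching (as in FANTOM) permutes neurons into correspondence before averaging.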

Similar Papers
  • Research Article
  • Cited by 5
  • 10.1016/j.cmpb.2022.107041
Classification of myocardial fibrosis in DE-MRI based on semi-supervised semantic segmentation and dual attention mechanism
  • Jul 26, 2022
  • Computer Methods and Programs in Biomedicine
  • Yuhan Ding + 3 more

  • Conference Article
  • Cited by 1
  • 10.1109/ivcnz.2010.6148803
Computer-assisted segmentation of brain tumor lesions from multi-sequence Magnetic Resonance Imaging using the Mumford-Shah model
  • Nov 1, 2010
  • Jihan M Zoghbi + 2 more

Segmentation of brain lesions in Magnetic Resonance Imaging (MRI) is a difficult task, even for specialists, owing to noise, partial volume effects, and susceptibility artifacts in the images and on the borders of the regions of interest. These problems can interfere with the results when manual segmentation is used. Manual segmentation relies on local anatomic information and the user's background, which implies the need for constant human intervention. Deformable model approaches attempt to minimize these drawbacks by outlining the region of interest semi-automatically, and they have been shown to be effective in extracting lesion boundaries in brain MR images. The proposed method employs the multi-channel version of the Mumford-Shah model via level set methods to segment multi-sequence brain magnetic resonance (MR) images: FLAIR (fluid-attenuated inversion recovery), T1-weighted, and T2-weighted images. Results showed that segmenting the multi-sequence images with this methodology yielded superior results compared with using each sequence alone. As a consequence, medical doctors can exploit the segmentation results to follow up their patients' status by monitoring the evolution or involution of brain lesions.

  • Research Article
  • Cited by 21
  • 10.1016/j.bspc.2016.06.016
Multimodal spatial-based segmentation framework for white matter lesions in multi-sequence magnetic resonance images
  • Jul 21, 2016
  • Biomedical Signal Processing and Control
  • Tianming Zhan + 5 more

  • Conference Article
  • Cited by 4
  • 10.1109/clei47609.2019.235102
Improving Semantic Segmentation of 3D Medical Images on 3D Convolutional Neural Networks
  • Sep 1, 2019
  • Alejandra Marquez Herrera + 2 more

A neural network is a mathematical model that is able to perform a task automatically or semi-automatically after learning the human knowledge that we provided. Moreover, a Convolutional Neural Network (CNN) is a type of neural network that has been shown to efficiently learn tasks related to the area of image analysis, such as image segmentation, whose main purpose is to find regions or separable objects within an image. A more specific type of segmentation, called semantic segmentation, guarantees that each region has a semantic meaning by giving it a label or class. Since CNNs can automate the task of image semantic segmentation, they have been very useful in the medical area, where they are applied to the segmentation of organs or abnormalities (tumors). This work aims to improve the task of binary semantic segmentation of volumetric medical images acquired by Magnetic Resonance Imaging (MRI) using a pre-existing Three-Dimensional Convolutional Neural Network (3D CNN) architecture. We propose a formulation of a loss function for training this 3D CNN to improve pixel-wise segmentation results. This loss function is formulated based on the idea of adapting a similarity coefficient, used for measuring the spatial overlap between the prediction and ground truth, and then using it to train the network. As a contribution, the developed approach achieved good performance in a context where the pixel classes are imbalanced. We show how the choice of the loss function for training can affect the final quality of the segmentation. We validate our proposal over two medical image semantic segmentation datasets and show comparisons in performance between the proposed loss function and other pre-existing loss functions used for binary semantic segmentation.
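The loss described above adapts a spatial-overlap similarity coefficient for training; the Dice coefficient is the usual choice for this. A minimal NumPy sketch of a soft Dice loss, as an illustration of the general idea rather than the authors' exact formulation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - Dice overlap between a probability map
    and a binary ground-truth mask. It is driven by overlap rather
    than per-pixel accuracy, which makes it robust when foreground
    pixels are vastly outnumbered by background (class imbalance).
    """
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

# Toy 2x2 prediction vs. ground truth.
pred = np.array([[0.9, 0.1],
                 [0.8, 0.0]])
mask = np.array([[1, 0],
                 [1, 0]])
loss = soft_dice_loss(pred, mask)
# intersection = 1.7, denom = 3.8, so loss ≈ 1 - 3.4/3.8 ≈ 0.105
```

A perfect prediction drives the loss to 0; an all-background prediction on a tiny lesion is heavily penalized, unlike plain cross-entropy.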

  • Research Article
  • 10.19153/cleiej.23.1.4
Semantic Segmentation of 3D Medical Images with 3D Convolutional Neural Networks
  • Apr 1, 2020
  • CLEI Electronic Journal
  • Alejandra Márquez Herrera + 2 more

A neural network is a mathematical model that is able to perform a task automatically or semi-automatically after learning the human knowledge that we provided. Moreover, a Convolutional Neural Network (CNN) is a type of neural network that has been shown to efficiently learn tasks related to the area of image analysis, such as image segmentation, whose main purpose is to find regions or separable objects within an image. A more specific type of segmentation, called semantic segmentation, guarantees that each region has a semantic meaning by giving it a label or class. Since CNNs can automate the task of image semantic segmentation, they have been very useful in the medical area, where they are applied to the segmentation of organs or abnormalities (tumors). This work aims to improve the task of binary semantic segmentation of volumetric medical images acquired by Magnetic Resonance Imaging (MRI) using a pre-existing Three-Dimensional Convolutional Neural Network (3D CNN) architecture. We propose a formulation of a loss function for training this 3D CNN to improve pixel-wise segmentation results. This loss function is formulated based on the idea of adapting a similarity coefficient, used for measuring the spatial overlap between the prediction and ground truth, and then using it to train the network. As a contribution, the developed approach achieved good performance in a context where the pixel classes are imbalanced. We show how the choice of the loss function for training can affect the final quality of the segmentation. We validate our proposal over two medical image semantic segmentation datasets and show comparisons in performance between the proposed loss function and other pre-existing loss functions used for binary semantic segmentation.

  • Research Article
  • Cited by 56
  • 10.1093/jrr/rrz063
Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy
  • Dec 10, 2019
  • Journal of Radiation Research
  • Yuhei Koike + 9 more

The aim of this work was to generate synthetic computed tomography (sCT) images from multi-sequence magnetic resonance (MR) images using an adversarial network and to assess the feasibility of sCT-based treatment planning for brain radiotherapy. Datasets for 15 patients with glioblastoma were selected, and 580 pairs of CT and MR images were used. T1-weighted, T2-weighted and fluid-attenuated inversion recovery MR sequences were combined to create a three-channel image as input data. A conditional generative adversarial network (cGAN) was trained using image patches. The image quality was evaluated using voxel-wise mean absolute errors (MAEs) of the CT number. For the dosimetric evaluation, 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans were generated using the original CT set and recalculated using the sCT images. The isocenter dose and dose–volume parameters were compared for 3D-CRT and VMAT plans, respectively. The equivalent path length was also compared. The mean MAEs for the whole body, soft tissue and bone regions were 108.1 ± 24.0, 38.9 ± 10.7 and 366.2 ± 62.0 Hounsfield units, respectively. The dosimetric evaluation revealed no significant difference in the isocenter dose for 3D-CRT plans. The differences in the dose received by 2% of the volume (D2%), D50% and D98% relative to the prescribed dose were <1.0%. The overall equivalent path length was shorter than that for real CT by 0.6 ± 1.9 mm. A treatment planning study using the generated sCT detected only small, clinically negligible differences. These findings demonstrate the feasibility of generating sCT images for MR-only radiotherapy from multi-sequence MR images using a cGAN.
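The image-quality metric above, voxel-wise MAE of the CT number within a region (whole body, soft tissue, bone), is straightforward to compute. A hedged NumPy sketch with toy Hounsfield-unit values; the function name and arrays are illustrative, not from the study:

```python
import numpy as np

def region_mae(sct, ct, mask):
    """Voxel-wise mean absolute error of CT numbers (in HU),
    restricted to a boolean region mask (e.g. body, soft tissue,
    or bone). Boolean indexing flattens the masked voxels."""
    return np.abs(sct[mask] - ct[mask]).mean()

# Toy 2x2 "slices": real CT vs. synthetic CT, whole-body mask.
ct   = np.array([[   0.0, 1000.0], [-100.0, 40.0]])
sct  = np.array([[  10.0,  950.0], [-120.0, 40.0]])
body = np.ones_like(ct, dtype=bool)
mae = region_mae(sct, ct, body)
# errors are 10, 50, 20, 0 HU -> mean 20.0 HU
```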

  • Research Article
  • Cited by 7
  • 10.1038/s41598-023-33900-x
Improvement of semantic segmentation through transfer learning of multi-class regions with convolutional neural networks on supine and prone breast MRI images
  • Apr 27, 2023
  • Scientific Reports
  • Sungwon Ham + 5 more

Semantic segmentation of the breast and surrounding tissues in supine and prone breast magnetic resonance imaging (MRI) is required for various kinds of computer-assisted diagnosis for surgical applications. Variability of breast shape between supine and prone poses, along with various MRI artifacts, makes it difficult to obtain robust segmentation of the breast and surrounding tissue. Therefore, we evaluated semantic segmentation with transfer learning of convolutional neural networks to create robust breast segmentation in breast MRI regardless of supine or prone position. A total of 29 patients with T1-weighted contrast-enhanced images were collected at Asan Medical Center, and two types of breast MRI were performed: one in the prone position and one in the supine position. Four classes, comprising lungs and heart, muscles and bones, parenchyma with cancer, and skin and fat, were manually drawn by an expert. Semantic segmentation on breast MRI scans with supine, prone, transferred from prone to supine, and pooled supine and prone MRI was trained and compared using 2D U-Net, 3D U-Net, 2D nnU-Net and 3D nnU-Net. The best performance was achieved by 2D models with transfer learning. Our results showed excellent performance and could be used for clinical purposes such as breast registration and computer-aided diagnosis.

  • Research Article
  • Cited by 2
  • 10.1007/s12194-025-00901-6
Semantic segmentation for individual thigh skeletal muscles of athletes on magnetic resonance images.
  • Mar 19, 2025
  • Radiological physics and technology
  • Jun Kasahara + 4 more

The skeletal muscles that athletes should train vary depending on their discipline and position. Therefore, individual skeletal muscle cross-sectional area assessment is important in the development of training strategies. To measure the cross-sectional area of skeletal muscle, manual segmentation of each muscle is performed using magnetic resonance (MR) imaging. This task is time-consuming and requires significant effort, and interobserver variability can sometimes be problematic. The purpose of this study was to develop an automated computerized method for semantic segmentation of individual thigh skeletal muscles from MR images of athletes. Our database consisted of 697 images from the thighs of 697 elite athletes. The images were randomly divided into a training dataset (70%), a validation dataset (10%), and a test dataset (20%). A label image was generated for each image by manually annotating 15 object classes: 12 different skeletal muscles, fat, bones, and vessels and nerves. Using the validation dataset, DeepLab v3+ was chosen from three different semantic segmentation models as the base model for segmenting individual thigh skeletal muscles. The feature extractor in DeepLab v3+ was also optimized to ResNet50. The mean Jaccard index and Dice index for the proposed method were 0.853 and 0.916, respectively, which were significantly higher than those from conventional DeepLab v3+ (Jaccard index: 0.810, p < .001; Dice index: 0.887, p < .001). The proposed method achieved a mean area error of 3.12% across the 15 object classes, which is useful for the assessment of skeletal muscle cross-sectional area from MR images.

  • Research Article
  • Cited by 2
  • 10.32604/csse.2022.022314
Make U-Net Greater: An Easy-to-Embed Approach to Improve Segmentation Performance Using Hypergraph
  • Jan 1, 2022
  • Computer Systems Science and Engineering
  • Jing Peng + 7 more

Cardiac anatomy segmentation is essential for cardiomyopathy clinical diagnosis and treatment planning. Thus, accurate delineation of target volumes at risk in cardiac anatomy is important. However, manual delineation is a time-consuming and labor-intensive process for cardiologists and has been shown to lead to significant inter- and intra-practitioner variability. Thus, computer-aided or fully automatic segmentation methods are required; they can significantly economize on manpower and improve treatment efficiency. Recently, deep convolutional neural network (CNN) based methods have achieved remarkable successes in various kinds of vision tasks, such as classification, segmentation and object detection. Semantic segmentation can be considered a pixel-wise task: it requires high-level abstract semantic information while maintaining spatial detail contexts. Long-range context information plays a crucial role in this scenario, but the traditional convolution kernel only provides a local and small receptive field. To address the problem, we propose a plug-and-play module aggregating both local and global information (the LGIA module) to capture the high-order relationship between nodes that are far apart. We incorporate both local and global correlations into a hypergraph, which is able to capture high-order relationships between nodes via the concept of a hyperedge connecting a subset of nodes. The local correlation considers neighborhood nodes that are spatially adjacent and similar in the same CNN feature maps of a magnetic resonance (MR) image; the global correlation is searched from a batch of CNN feature maps of MR images in feature space. The influence of these two correlations on semantic segmentation is complementary. We validated our LGIA module on various CNN segmentation models with the cardiac MR images dataset. Experimental results demonstrate that our approach outperformed several baseline models.

  • Research Article
  • Cited by 22
  • 10.1038/s41598-024-84692-7
Explainable artificial intelligence with UNet based segmentation and Bayesian machine learning for classification of brain tumors using MRI images
  • Jan 3, 2025
  • Scientific Reports
  • K Lakshmi + 5 more

Detecting brain tumours (BT) early improves treatment possibilities and increases patient survival rates. Magnetic resonance imaging (MRI) offers more comprehensive information, such as better contrast and clarity, than any alternative scanning process. Manually separating BTs from the many MRI images gathered in medical practice for cancer analysis is challenging and time-consuming, so machine learning technologies are used to reveal tumours in brain MRI scans, simplifying the process for doctors. MRI images can sometimes appear normal even when a patient has a tumour or malignancy. Deep learning approaches have recently relied on deep convolutional neural networks to analyze medical images with promising outcomes, helping to save lives faster and rectify some medical errors. With this motivation, this article presents a new explainable artificial intelligence technique with semantic segmentation and Bayesian machine learning for brain tumors (XAISS-BMLBT). The presented XAISS-BMLBT technique mainly concentrates on the semantic segmentation and classification of BT in MRI images. It initially involves bilateral filtering-based image pre-processing to eliminate noise. Next, the XAISS-BMLBT technique performs the MEDU-Net+ segmentation process to define the affected brain regions. For feature extraction, the ResNet50 model is utilized. Furthermore, the Bayesian regularized artificial neural network (BRANN) model is used to identify the presence of BTs. Finally, an improved radial movement optimization model is employed for hyperparameter tuning of the BRANN technique. To highlight the improved performance of the XAISS-BMLBT technique, a series of simulations was conducted on a benchmark database. The experimental validation of the XAISS-BMLBT technique showed a superior accuracy value of 97.75% over existing models.

  • Research Article
  • Cited by 5
  • 10.3390/app13148028
Non-Invasive Estimation of Gleason Score by Semantic Segmentation and Regression Tasks Using a Three-Dimensional Convolutional Neural Network
  • Jul 9, 2023
  • Applied Sciences
  • Takaaki Yoshimura + 2 more

The Gleason score (GS) is essential in categorizing prostate cancer risk using biopsy. The aim of this study was to propose a two-class GS classification (GS < 7 and GS ≥ 7) methodology using a three-dimensional convolutional neural network with semantic segmentation to predict GS non-invasively using multiparametric magnetic resonance images (MRIs). Four training datasets of T2-weighted images and apparent diffusion coefficient maps, with and without semantic segmentation, were used as test images. All images and lesion information were selected from a training cohort of the Society of Photographic Instrumentation Engineers, the American Association of Physicists in Medicine, and the National Cancer Institute (SPIE–AAPM–NCI) PROSTATEx Challenge dataset. Precision, recall, overall accuracy and area under the receiver operating characteristics curve (AUROC) were calculated from this dataset, which comprises publicly available prostate MRIs. Our data revealed that the GS ≥ 7 precision (0.73 ± 0.13) and GS < 7 recall (0.82 ± 0.06) were significantly higher using semantic segmentation (p < 0.05). Moreover, the AUROC in segmentation volume was higher than that in normal volume (ADC map: 0.70 ± 0.05 vs. 0.69 ± 0.08; T2WI: 0.71 ± 0.07 vs. 0.63 ± 0.08). However, there were no significant differences in overall accuracy between the segmentation and normal volume. This study generated a diagnostic method for non-invasive GS estimation from MRIs.

  • Research Article
  • Cited by 39
  • 10.1016/j.media.2019.07.003
Holistic decomposition convolution for effective semantic segmentation of medical volume images.
  • Jul 8, 2019
  • Medical Image Analysis
  • Guodong Zeng + 1 more

  • Research Article
  • Cited by 40
  • 10.1186/s12911-019-0988-4
A comparison between two semantic deep learning frameworks for the autosomal dominant polycystic kidney disease segmentation based on magnetic resonance images
  • Dec 1, 2019
  • BMC Medical Informatics and Decision Making
  • Vitoantonio Bevilacqua + 6 more

Background: The automatic segmentation of kidneys in medical images is not a trivial task when the subjects undergoing the medical examination are affected by Autosomal Dominant Polycystic Kidney Disease (ADPKD). Several works dealing with the segmentation of Computed Tomography images from pathological subjects have been proposed, but they involve a highly invasive examination or require user interaction to perform the segmentation. In this work, we propose a fully automated approach for the segmentation of Magnetic Resonance images, both reducing the invasiveness of the acquisition device and requiring no user interaction for the segmentation. Methods: Two different approaches are proposed, based on Deep Learning architectures using Convolutional Neural Networks (CNN) for the semantic segmentation of images, without needing to extract any hand-crafted features. In detail, the first approach performs the automatic segmentation of images without any pre-processing of the input. Conversely, the second approach performs a two-step classification strategy: a first CNN automatically detects Regions Of Interest (ROIs), and a subsequent classifier performs the semantic segmentation on the ROIs previously extracted. Results: Even though the detection of ROIs shows an overall high number of false positives, the subsequent semantic segmentation on the extracted ROIs achieves high performance in terms of mean Accuracy. However, segmenting the entire input images remains the most accurate and reliable approach, showing better performance than the two-step approach. Conclusion: The obtained results show that both investigated approaches are reliable for the semantic segmentation of polycystic kidneys, since both strategies reach an Accuracy higher than 85%. Both methodologies also show performance comparable and consistent with other approaches in the literature working on images from different sources, reducing both the invasiveness of the analyses and the user interaction needed to perform the segmentation task.

  • Conference Article
  • Cited by 51
  • 10.1109/iccv48922.2021.00681
Local Temperature Scaling for Probability Calibration
  • Oct 1, 2021
  • Zhipeng Ding + 3 more

For semantic segmentation, label probabilities are often uncalibrated as they are typically only the by-product of a segmentation task. Intersection over Union (IoU) and Dice score are often used as criteria for segmentation success, while metrics related to label probabilities are not often explored. However, probability calibration approaches have been studied, which match probability outputs with experimentally observed errors. These approaches mainly focus on classification tasks, but not on semantic segmentation. Thus, we propose a learning-based calibration method that focuses on multi-label semantic segmentation. Specifically, we adopt a convolutional neural network to predict local temperature values for probability calibration. One advantage of our approach is that it does not change prediction accuracy, hence allowing for calibration as a postprocessing step. Experiments on the COCO, CamVid, and LPBA40 datasets demonstrate improved calibration performance for a range of different metrics. We also demonstrate the good performance of our method for multi-atlas brain segmentation from magnetic resonance images.
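Temperature scaling divides the logits by a temperature before the softmax; the local variant above predicts a separate temperature per pixel. A minimal NumPy sketch of the scaling step, showing why the argmax, and hence the segmentation accuracy, is unchanged for positive temperatures (toy values; the CNN that predicts the temperature map is omitted):

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax along the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def locally_scaled_probs(logits, temperature):
    """Divide each pixel's logits by its own predicted temperature
    before the softmax. T > 1 softens the distribution, T < 1
    sharpens it; for T > 0 the argmax is unchanged, so calibration
    can run as a pure post-processing step."""
    return softmax(logits / temperature[..., None], axis=-1)

# A 1x2 "image" with 2 classes and a per-pixel temperature map.
logits = np.array([[[2.0, 0.0],
                    [4.0, 0.0]]])     # shape (1, 2, 2)
temps  = np.array([[1.0, 2.0]])       # shape (1, 2)
probs = locally_scaled_probs(logits, temps)
# pixel 1's over-confident [4, 0] is scaled to [2, 0], matching
# pixel 0's confidence while keeping the same predicted class
```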

  • Research Article
  • Cited by 10
  • 10.1007/s12194-021-00633-3
Simultaneous brain structure segmentation in magnetic resonance images using deep convolutional neural networks.
  • Aug 2, 2021
  • Radiological Physics and Technology
  • Tomoko Maruyama + 7 more

In brain magnetic resonance imaging (MRI) examinations, rapidly acquired two-dimensional (2D) T1-weighted sagittal slices are typically used to confirm brainstem atrophy and the presence of signals in the posterior pituitary gland. Image segmentation is essential for the automatic evaluation of chronological changes in the brainstem and pituitary gland. Thus, the purpose of our study was to use deep learning to automatically segment internal organs (brainstem, corpus callosum, pituitary, cerebrum, and cerebellum) in midsagittal slices of 2D T1-weighted images. Deep learning for the automatic segmentation of seven regions in the images was accomplished using two different methods: patch-based segmentation and semantic segmentation. The networks used for patch-based segmentation were AlexNet, GoogLeNet, and ResNet50, whereas semantic segmentation was accomplished using SegNet, VGG16-weighted SegNet, and U-Net. The precision and Jaccard index were calculated, and the extraction accuracy of the six deep convolutional neural network (DCNN) systems was evaluated. The highest precision (0.974) was obtained with the VGG16-weighted SegNet, and the lowest precision (0.506) was obtained with ResNet50. Based on the data, calculation times, and Jaccard indices obtained in this study, segmentation on a 2D image may be considered a viable and effective approach. We found that the optimal automatic segmentation of organs (brainstem, corpus callosum, pituitary, cerebrum, and cerebellum) on brain sagittal T1-weighted images could be achieved using the VGG16-weighted SegNet.
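The precision and Jaccard index reported above are standard overlap metrics, and Jaccard (J) and Dice are interchangeable via Dice = 2J / (1 + J). A small NumPy sketch of both metrics on binary masks (illustrative, not the study's evaluation code):

```python
import numpy as np

def jaccard_and_dice(pred_mask, gt_mask):
    """Jaccard (IoU) and Dice coefficients from two binary masks.
    Dice = 2J / (1 + J), so the two metrics always rank methods
    the same way; Dice is just more forgiving numerically."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = inter / union if union else 1.0
    total = pred.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0
    return jaccard, dice

# Toy 2x2 masks: prediction covers the ground truth plus one extra pixel.
p = np.array([[1, 1],
              [0, 0]])
g = np.array([[1, 0],
              [0, 0]])
j, d = jaccard_and_dice(p, g)
# inter = 1, union = 2 -> J = 0.5; Dice = 2*1/(2+1) = 2/3
```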
