Segmentation of glioma sub-regions based on EnnUnet in tumor treating fields

Abstract

Accurate segmentation of glioblastoma (GBM), including the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), from multi-modal magnetic resonance images (MRI) is essential for precise Tumor Treating Fields (TTFields) simulation. This study aims to address the challenges of this segmentation task to improve the accuracy of TTFields simulation results. We propose enhanced nnUnet (EnnUnet), a novel framework for multi-modal MRI segmentation that enhances the robust and widely-used nnUnet architecture. This advanced architecture integrates three key innovations: (1) Generalized Multi-kernel Convolution blocks are incorporated to capture multi-scale features and long-range dependencies. (2) A dual attention mechanism is employed at skip connections to refine feature fusion. (3) A novel boundary and Top-K loss is implemented for boundary-based refinement and to focus the training process on hard-to-segment pixels. The effectiveness of each enhancement was systematically evaluated through an ablation study on the BraTS 2023 dataset. The final EnnUnet model achieved superior performance, with average Dice scores of 93.52%, 92.07%, and 87.60% for the WT, TC, and ET, respectively, consistently outperforming other state-of-the-art methods. Furthermore, TTFields simulations on real patient data demonstrated that our precise segmentations yield more realistic electric field distributions compared to simplified homogeneous tumor models. The proposed EnnUnet architecture showcases promising potential for highly accurate and robust glioma segmentation. It offers a more reliable foundation for computational modeling, which is essential for enhancing the precision of TTFields treatment planning and advancing personalized therapeutic strategies for GBM patients.
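The Top-K component of the loss described above can be sketched as follows. This is a minimal NumPy illustration of the general Top-K idea (keep only the hardest fraction of pixels when averaging the loss); the selection fraction `k` and the plain cross-entropy form are assumptions, and the paper's actual loss additionally includes a boundary term.

```python
import numpy as np

def topk_ce_loss(true_class_probs, k=0.1):
    """Average cross-entropy over the hardest fraction k of pixels.

    true_class_probs: flattened array of predicted probabilities assigned
    to the ground-truth class of each pixel. Pixels with low probability
    for the correct class produce large losses and are the ones kept by
    the Top-K selection.
    """
    losses = -np.log(np.clip(true_class_probs, 1e-7, 1.0))
    n_keep = max(1, int(k * losses.size))          # keep at least one pixel
    hardest = np.sort(losses.ravel())[::-1][:n_keep]
    return hardest.mean()
```

Because only the hardest pixels contribute, gradients concentrate on ambiguous regions such as tumor boundaries rather than on the many easy background voxels.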

Similar Papers
  • Research Article
  • Cited by 32
  • 10.3389/fradi.2021.704888
Brain Tumor Segmentation From Multi-Modal MR Images via Ensembling UNets.
  • Oct 21, 2021
  • Frontiers in Radiology
  • Yue Zhang + 8 more

Glioma is a type of severe brain tumor, and its accurate segmentation is useful in surgery planning and progression evaluation. Based on different biological properties, the glioma can be divided into three partially-overlapping regions of interest, including whole tumor (WT), tumor core (TC), and enhancing tumor (ET). Recently, UNet has demonstrated its effectiveness in automatically segmenting brain tumors from multi-modal magnetic resonance (MR) images. In this work, instead of network architecture, we focus on making use of prior knowledge (brain parcellation), training and testing strategy (joint 3D+2D), ensembling, and post-processing to improve the brain tumor segmentation performance. We explore the accuracy of three UNets with different inputs, then ensemble the corresponding three outputs, followed by post-processing to achieve the final segmentation. Similar to most existing works, the first UNet uses 3D patches of multi-modal MR images as the input. The second UNet uses brain parcellation as an additional input. The third UNet takes as input 2D slices of multi-modal MR images, brain parcellation, and probability maps of WT, TC, and ET obtained from the second UNet. Then, we sequentially unify the WT segmentation from the third UNet and the fused TC and ET segmentations from the first and the second UNets as the complete tumor segmentation. Finally, we adopt a post-processing strategy that labels small ET as non-enhancing tumor to correct some false-positive ET segmentations. On one publicly-available challenge validation dataset (BraTS2018), the proposed segmentation pipeline yielded average Dice scores of 91.03/86.44/80.58% and average 95% Hausdorff distances of 3.76/6.73/2.51 mm for WT/TC/ET, exhibiting superior segmentation performance over other state-of-the-art methods. We then evaluated the proposed method on the BraTS2020 training data through five-fold cross-validation, with similar performance also observed.
The proposed method was finally evaluated on 10 in-house cases, and its effectiveness was established qualitatively by professional radiologists.
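The post-processing step described above (relabeling small ET as non-enhancing tumor) can be sketched in a few lines. This is a hedged illustration, not the authors' code: the label values follow the common BraTS convention, and the voxel-count threshold is an assumed example value.

```python
import numpy as np

# BraTS-style labels (assumed here for illustration):
# 1 = necrotic/non-enhancing tumor core, 2 = edema, 4 = enhancing tumor.
NET, ED, ET = 1, 2, 4

def relabel_small_et(seg, min_et_voxels=500):
    """If the predicted ET region is implausibly small, relabel it as NET.

    This mirrors the common BraTS post-processing trick of suppressing
    false-positive enhancing-tumor voxels; the 500-voxel threshold is an
    assumed value, not taken from the paper.
    """
    seg = seg.copy()
    if (seg == ET).sum() < min_et_voxels:
        seg[seg == ET] = NET
    return seg
```

The rationale is that tiny predicted ET regions are far more likely to be noise than true enhancement, so converting them to NET trades a small recall loss for a large precision gain on the ET class.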

  • Research Article
  • Cited by 7
  • 10.3934/mbe.2023773
SDS-Net: A lightweight 3D convolutional neural network with multi-branch attention for multimodal brain tumor accurate segmentation.
  • Jan 1, 2023
  • Mathematical Biosciences and Engineering
  • Qian Wu + 4 more

The accurate and fast segmentation of tumor regions in brain Magnetic Resonance Imaging (MRI) is significant for clinical diagnosis, treatment, and monitoring, given the aggressive nature and high mortality rate of brain tumors. However, due to the limitation of computational complexity, convolutional neural networks (CNNs) face challenges in being efficiently deployed on resource-limited devices, which restricts their popularity in practical medical applications. To address this issue, we propose a lightweight and efficient 3D convolutional neural network, SDS-Net, for multimodal brain tumor MRI image segmentation. SDS-Net combines depthwise separable convolution and traditional convolution to construct the 3D lightweight backbone blocks, lightweight feature extraction (LFE) and lightweight feature fusion (LFF) modules, which effectively utilize the rich local features in multimodal images and enhance the segmentation performance of sub-tumor regions. In addition, 3D shuffle attention (SA) and 3D self-ensemble (SE) modules are incorporated into the encoder and decoder of the network. The SA helps to capture high-quality spatial and channel features from the modalities, and the SE acquires more refined edge features by gathering information from each layer. The proposed SDS-Net was validated on the BraTS datasets. Dice coefficients of 92.7, 80.0 and 88.9% were achieved for whole tumor (WT), enhancing tumor (ET) and tumor core (TC), respectively, on the BraTS 2020 dataset. On the BraTS 2021 dataset, the Dice coefficients were 91.8, 82.5 and 86.8% for WT, ET and TC, respectively. Compared with other state-of-the-art methods, SDS-Net achieved superior segmentation performance with fewer parameters and less computational cost, at only 2.52 M parameters and 68.18 G FLOPs.
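The parameter savings behind depthwise separable convolution, which the abstract above credits for SDS-Net's light weight, are easy to make concrete. The two helper functions below simply count convolution weights (bias terms omitted); the 32-to-64-channel, 3x3x3 example is an assumed illustration, not a layer taken from SDS-Net.

```python
def conv3d_params(c_in, c_out, k):
    # weights of a standard 3D convolution: one k*k*k kernel
    # per (input channel, output channel) pair
    return c_in * c_out * k ** 3

def sep_conv3d_params(c_in, c_out, k):
    # depthwise: one k*k*k kernel per input channel;
    # pointwise: a 1x1x1 convolution mixing channels
    return c_in * k ** 3 + c_in * c_out
```

For a 32-to-64-channel layer with 3x3x3 kernels, the standard convolution needs 55296 weights while the separable version needs 2912, roughly a 19x reduction, which is why such blocks suit resource-limited devices.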

  • Conference Article
  • Cited by 2
  • 10.1109/isnib57382.2022.10075787
A Deep Learning-based 3D-GAN for Glioma Subregions Detection and Segmentation in Multimodal Brain MRI volumes
  • Dec 7, 2022
  • Adel Kermi + 3 more

Accurate, fast, and automatic glioma segmentation from brain magnetic resonance imaging (MRI) scans is crucial for brain cancer diagnosis and therapy, but it remains challenging because of variations in the location, form, and imaging intensity of gliomas. This paper presents an automatic high- and low-grade glioma (HGG and LGG) sub-region segmentation technique using a deep learning-based three-dimensional generative adversarial network (3D-GAN) that localizes and segments whole gliomas and their intra-regions, comprising necroses, edemas and active tumors, in multimodal brain MRI volumes of the BraTS'2022 datasets. Experimental tests and evaluations of the proposed GAN model were carried out on both the BraTS'2022 training and validation datasets, containing 5880 brain MRIs corresponding to 1470 different subjects with HGG and LGG of various sizes, forms, locations, and intensities. Dice coefficients for enhancing tumor (ET), whole tumor (WT), and tumor core (TC) reached 0.770, 0.872 and 0.832, respectively, on 20% of the training dataset. On the challenge validation dataset, our model achieved average ET, WT, and TC Dice scores of 0.774, 0.907 and 0.829, respectively.

  • Research Article
  • Cited by 19
  • 10.1117/1.jmi.6.2.024005
Radiomics-based convolutional neural network for brain tumor segmentation on multiparametric magnetic resonance imaging.
  • May 7, 2019
  • Journal of Medical Imaging
  • Prateek Prasanna + 3 more

Accurate segmentation of gliomas on routine magnetic resonance image (MRI) scans plays an important role in disease diagnosis, prognosis, and patient treatment planning. We present a fully automated approach, the radiomics-based convolutional neural network (RadCNN), for segmenting both high- and low-grade gliomas using multimodal MRI volumes (T1c, T2w, and FLAIR). RadCNN incorporates radiomic texture features (i.e., Haralick, Gabor, and Laws) within DeepMedic [a deep 3-D convolutional neural network (CNN) segmentation framework that uses image intensities; a top-performing method in the BraTS 2016 challenge] to further augment the performance of brain tumor subcompartment segmentation. We first identify textural radiomic representations that best separate the different subcompartments [enhancing tumor (ET), whole tumor (WT), and tumor core (TC)] on the training set, and then feed these representations as inputs to the CNN classifier for prediction of different subcompartments. We hypothesize that textural radiomic representations of lesion subcompartments will enhance the separation of subcompartment boundaries, and hence that providing these features as inputs to the deep CNN, over and above raw intensity values alone, will improve the subcompartment segmentation. Using separate training, validation, and test sets of patients, the RadCNN method achieved Dice similarity coefficient (DSC) scores of 0.71, 0.89, and 0.73 for ET, WT, and TC, respectively. Compared to the DeepMedic model, RadCNN showed improvement in DSC scores for both ET and WT and demonstrated comparable results in segmenting the TC. Similarly, smaller Hausdorff distance measures were obtained with RadCNN than with the DeepMedic model across all the subcompartments.
Following the segmentation of the different subcompartments, we extracted a set of subcompartment specific radiomic descriptors that capture lesion disorder and assessed their ability in separating patients into different survival cohorts (short-, mid- and long-term survival) based on their overall survival from the date of baseline diagnosis. Using a multilinear regression approach, we achieved accuracies of 0.57, 0.63, and 0.45 for the training, validation, and test cases, respectively.

  • Research Article
  • Cited by 134
  • 10.1016/j.bspc.2022.103861
dResU-Net: 3D deep residual U-Net based brain tumor segmentation from multimodal MRI
  • Jun 14, 2022
  • Biomedical Signal Processing and Control
  • Rehan Raza + 4 more


  • Research Article
  • 10.12182/20240360208
Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features
  • Mar 20, 2024
  • Sichuan da xue xue bao. Yi xue ban = Journal of Sichuan University. Medical science edition
  • Hengyi Tian + 3 more

The fully automatic segmentation of glioma and its subregions is fundamental for computer-aided clinical diagnosis of tumors. In the segmentation of brain magnetic resonance imaging (MRI), convolutional neural networks with small convolutional kernels can only capture local features and are ineffective at integrating global features, which narrows the receptive field and leads to insufficient segmentation accuracy. This study aims to use dilated convolution to address the problem of inadequate global feature extraction in 3D-UNet. 1) Algorithm construction: A 3D-UNet model with three pathways for more global contextual feature extraction, termed 3DGE-UNet, was proposed in this paper. Using the publicly available dataset from the Brain Tumor Segmentation Challenge (BraTS) of 2019 (335 patient cases), a global contextual feature extraction (GE) module was designed. This module was integrated at the first, second, and third skip connections of the 3D-UNet network and was used to fully extract global features at different scales from the images. The extracted global features were then overlaid with the upsampled feature maps to expand the model's receptive field and achieve deep fusion of features at different scales, thereby facilitating end-to-end automatic segmentation of brain tumors. 2) Algorithm validation: The image data were sourced from the BraTS 2019 dataset, which included the preoperative MRI images of 335 patients across four modalities (T1, T1ce, T2, and FLAIR) and tumor images annotated by physicians. The dataset was divided into training, validation, and testing sets at an 8:1:1 ratio. Physician-labelled tumor images were used as the gold standard.
Then, the algorithm's segmentation performance on the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) was evaluated on the test set using the Dice coefficient (for overall effectiveness evaluation), sensitivity (detection rate of lesion areas), and 95% Hausdorff distance (segmentation accuracy of tumor boundaries). The performance was tested using both the 3D-UNet model without the GE module and the 3DGE-UNet model with the GE module to internally validate the effectiveness of the GE module. Additionally, the performance indicators were evaluated using the 3DGE-UNet model, ResUNet, UNet++, nnUNet, and UNETR, and the convergence of these five algorithm models was compared to externally validate the effectiveness of the 3DGE-UNet model. 1) In internal validation, the enhanced 3DGE-UNet model achieved mean Dice values of 91.47%, 87.14%, and 83.35% for segmenting the WT, TC, and ET regions in the test set, respectively, producing the optimal values for comprehensive evaluation. These scores were superior to the corresponding scores of the traditional 3D-UNet model (89.79%, 85.13%, and 80.90%), indicating a significant improvement in segmentation accuracy across all three regions (P<0.05). Compared with the 3D-UNet model, the 3DGE-UNet model demonstrated higher sensitivity for ET (86.46% vs. 80.77%) (P<0.05), demonstrating better performance in detecting lesion areas. When dealing with lesion areas, the 3DGE-UNet model tended to identify and capture the positive areas more comprehensively, thereby effectively reducing the likelihood of missed diagnoses. The 3DGE-UNet model also exhibited exceptional performance in segmenting the edges of the WT, producing a mean 95% Hausdorff distance superior to that of the 3D-UNet model (8.17 mm vs. 13.61 mm, P<0.05). However, its performance for TC (8.73 mm vs. 7.47 mm) and ET (6.21 mm vs. 5.45 mm) was similar to that of the 3D-UNet model.
2) In the external validation, the other four algorithms outperformed the 3DGE-UNet model only in the mean Dice for TC (87.25%), the mean sensitivity for WT (94.59%), the mean sensitivity for TC (86.98%), and the mean 95% Hausdorff distance for ET (5.37 mm). Nonetheless, these differences were not statistically significant (P>0.05). The 3DGE-UNet model demonstrated rapid convergence during the training phase, outpacing the other external models. The 3DGE-UNet model can effectively extract and fuse feature information at different scales, improving the accuracy of brain tumor segmentation.
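The Dice coefficient used as the primary metric throughout these evaluations is a simple overlap measure between a predicted and a reference binary mask. The sketch below is a minimal NumPy version of the standard formula, not code from any of the listed papers; the small smoothing constant `eps` is an assumed convenience to avoid division by zero on empty masks.

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice overlap between two binary masks (1 = region, 0 = background).

    Returns 2*|pred & gt| / (|pred| + |gt|), i.e. 1.0 for identical
    masks and 0.0 for disjoint ones.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```

In the BraTS setting this is computed per region class (WT, TC, ET) by binarizing the multi-label segmentation against each region definition before calling the function.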

  • Research Article
  • Cited by 20
  • 10.1016/j.bspc.2022.103939
Brain tumor segmentation using a hybrid multi resolution U-Net with residual dual attention and deep supervision on MR images
  • Jun 27, 2022
  • Biomedical Signal Processing and Control
  • Subin Sahayam + 3 more


  • Book Chapter
  • Cited by 15
  • 10.1007/978-3-030-46640-4_14
Aggregating Multi-scale Prediction Based on 3D U-Net in Brain Tumor Segmentation
  • Jan 1, 2020
  • Minglin Chen + 2 more

Magnetic resonance imaging (MRI) is the dominant modality used in the initial evaluation of patients with primary brain tumors due to its superior image resolution and high safety profile. Automated segmentation of brain tumors from MRI is critical in the determination of response to therapy. In this paper, we propose a novel method that aggregates multi-scale predictions from a 3D U-Net to segment enhancing tumor (ET), whole tumor (WT) and tumor core (TC) from multimodal MRI. Multi-scale predictions are derived from the decoder part of the 3D U-Net at different resolutions. The final prediction takes the minimum value of the corresponding pixel from the upsampled multi-scale predictions. Aggregating multi-scale predictions adds constraints to the network, which is beneficial for limited data. Additionally, we employ a model ensembling strategy to further improve the performance of the proposed network. Finally, we achieve Dice scores of 0.7745, 0.8640 and 0.7914, and Hausdorff distances (95th percentile) of 4.2365, 6.9381 and 6.6026 for ET, WT and TC, respectively, on the test set in BraTS 2019.
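The pixel-wise minimum aggregation described above can be sketched in two lines of NumPy. This is an illustrative 2D simplification under assumed conventions (probability maps, nearest-neighbour upsampling), not the authors' 3D implementation.

```python
import numpy as np

def upsample_nn(pred, factor):
    # nearest-neighbour upsampling of a 2D probability map
    # by an integer factor along both spatial axes
    return pred.repeat(factor, axis=0).repeat(factor, axis=1)

def aggregate_min(preds):
    # element-wise minimum across predictions brought to the same
    # (full) resolution; a pixel is kept only if every scale agrees
    return np.minimum.reduce(preds)
```

Taking the minimum acts as a conservative consensus: a voxel is assigned a high probability only when every decoder scale assigns it one, which constrains the network as the abstract notes.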

  • Research Article
  • Cited by 30
  • 10.1016/j.bspc.2021.103442
Scale-adaptive super-feature based MetricUNet for brain tumor segmentation
  • Dec 8, 2021
  • Biomedical Signal Processing and Control
  • Yujian Liu + 7 more


  • Research Article
  • Cited by 8
  • 10.1088/1361-6560/ad0c8d
NnUnetFormer: an automatic method based on nnUnet and transformer for brain tumor segmentation with multimodal MR images
  • Dec 11, 2023
  • Physics in Medicine & Biology
  • Shunchao Guo + 4 more

Objective. Both local and global context information are crucial semantic features for brain tumor segmentation, yet almost all CNN-based methods cannot learn global spatial dependencies well due to the limitations of convolution operations. The purpose of this paper is to build a new framework that makes full use of local and global features from multimodal MR images to improve the performance of brain tumor segmentation. Approach. A new automated segmentation method named nnUnetFormer was proposed based on nnUnet and transformers. It fuses transformer modules into the deeper layers of the nnUnet framework to efficiently obtain both local and global features of lesion regions from multimodal MR images. Main results. We evaluated our method on the BraTS 2021 dataset by 5-fold cross-validation and achieved excellent performance with Dice similarity coefficients (DSC) of 0.936, 0.921 and 0.872, and 95th percentiles of Hausdorff distance (HD95) of 3.96, 4.57 and 10.45 for the regions of whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively, which outperformed recent state-of-the-art methods in terms of both average DSC and average HD95. Besides, ablation experiments showed that fusing transformers into our modified nnUnet framework improves the performance of brain tumor segmentation, especially for the TC region. Moreover, to validate the generalization capacity of our method, we further conducted experiments on the FeTS 2021 dataset and achieved satisfactory segmentation performance on 11 unseen institutions, with DSC 0.912, 0.872 and 0.759, and HD95 6.16, 8.81 and 38.50 for the regions of WT, TC, and ET, respectively. Significance. Extensive qualitative and quantitative experimental results demonstrated that the proposed method has competitive performance against the state-of-the-art methods, indicating its potential for clinical applications.
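The global dependencies that the transformer modules above capture come from self-attention, in which every token (here, a flattened feature-map location) attends to every other. The sketch below is a minimal single-head NumPy version; taking the query/key/value projections as the identity is an assumption made for brevity, not the paper's design.

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention over token vectors.

    x: (n_tokens, dim). Each output token is a softmax-weighted mixture
    of all input tokens, so information propagates globally in one step.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax over keys
    return weights @ x                            # weighted mix of all tokens
```

Applying this only in the deeper, lower-resolution layers, as nnUnetFormer does, keeps the quadratic cost in the number of tokens manageable.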

  • Supplementary Content
  • Cited by 1
  • 10.1002/mp.15035
A hybrid feature selection-based brain tumor detection and segmentation in multiparametric magnetic resonance imaging.
  • Oct 13, 2021
  • Medical physics
  • Hao Chen + 5 more

To develop a novel method based on feature selection, combining a convolutional neural network (CNN) and ensemble learning (EL), to achieve high accuracy and efficiency of glioma detection and segmentation using multiparametric MRIs. We proposed an evolutionary feature selection-based hybrid approach for glioma detection and segmentation on four MR sequences (T2-FLAIR, T1, T1Gd, and T2). First, we trained a lightweight CNN to detect glioma and mask the suspected region to process large batches of MRI images. Second, we employed a differential evolution algorithm to search a feature space, composed of 416-dimensional radiomics features extracted from the four MRI sequences and 128-dimensional high-order features extracted by the CNN, to generate an optimal feature combination for pixel classification. Finally, we trained an EL classifier using the optimal feature combination to segment the whole tumor (WT) and its subregions, including non-enhancing tumor (NET), peritumoral edema (ED), and enhancing tumor (ET), in the suspected region. Experiments were carried out on 300 glioma patients from the BraTS2019 dataset using fivefold cross-validation, and the model was independently validated using the remaining 35 patients from the same database. The approach achieved a detection accuracy of 98.8% using four MRIs. The Dice coefficients (and standard deviations) were 0.852±0.057, 0.844±0.046, and 0.799±0.053 for segmentation of WT (NET+ET+ED), tumor core (NET+ET), and ET, respectively. The sensitivities were 0.873±0.074, 0.863±0.072, and 0.852±0.082, and the specificities were 0.994±0.005, 0.994±0.005, and 0.995±0.004 for the WT, tumor core, and ET, respectively. The performances and calculation times were compared with state-of-the-art approaches; our approach yielded a better overall performance with an average processing time of 139.5 s per set of four-sequence MRIs.
We demonstrated a robust and computationally cost-effective hybrid segmentation approach for glioma and its subregions on multi-sequence MR images. The proposed approach can be used for automated target delineation for glioma patients.

  • Research Article
  • Cited by 4
  • 10.1002/mp.15026
A hybrid feature selection-based approach for brain tumor detection and automatic segmentation on multiparametric magnetic resonance images.
  • Sep 24, 2021
  • Medical Physics
  • Hao Chen + 5 more

To develop a novel method based on feature selection, combining a convolutional neural network (CNN) and ensemble learning (EL), to achieve high accuracy and efficiency of glioma detection and segmentation using multiparametric MRIs. We proposed an evolutionary feature selection-based hybrid approach for glioma detection and segmentation on four MR sequences (T2-FLAIR, T1, T1Gd, and T2). First, we trained a lightweight CNN to detect glioma and mask the suspected region to process large batches of MRI images. Second, we employed a differential evolution algorithm to search a feature space, composed of 416-dimensional radiomic features extracted from the four MRI sequences and 128-dimensional high-order features extracted by the CNN, to generate an optimal feature combination for pixel classification. Finally, we trained an EL classifier using the optimal feature combination to segment the whole tumor (WT) and its subregions, including nonenhancing tumor (NET), peritumoral edema (ED), and enhancing tumor (ET), in the suspected region. Experiments were carried out on 300 glioma patients from the BraTS2019 dataset using fivefold cross-validation, and the model was independently validated using the remaining 35 patients from the same database. The approach achieved a detection accuracy of 98.8% using four MRIs. The Dice coefficients (and standard deviations) were 0.852±0.057, 0.844±0.046, and 0.799±0.053 for segmentation of WT (NET+ET+ED), tumor core (NET+ET), and ET, respectively. The sensitivities were 0.873±0.074, 0.863±0.072, and 0.852±0.082, and the specificities were 0.994±0.005, 0.994±0.005, and 0.995±0.004 for the WT, tumor core, and ET, respectively. The performances and calculation times were compared with state-of-the-art approaches; our approach yielded a better overall performance with an average processing time of 139.5 s per set of four-sequence MRIs.
We demonstrated a robust and computationally cost-effective hybrid segmentation approach for glioma and its subregions on multi-sequence MR images. The proposed approach can be used for automated target delineation for glioma patients.

  • Research Article
  • Cited by 21
  • 10.1148/ryai.2020190011
Three-Plane-assembled Deep Learning Segmentation of Gliomas.
  • Mar 1, 2020
  • Radiology: Artificial Intelligence
  • Shaocheng Wu + 3 more

To design a computational method for automatic brain glioma segmentation of multimodal MRI scans with high efficiency and accuracy. The 2018 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset was used in this study, consisting of routine clinically acquired preoperative multimodal MRI scans. Three subregions of glioma (the necrotic and nonenhancing tumor core, the peritumoral edema, and the contrast-enhancing tumor) were manually labeled by experienced radiologists. Two-dimensional U-Net models were built using a three-plane-assembled approach to segment three subregions individually (three-region model) or to segment only the whole tumor (WT) region (WT-only model). The term three-plane-assembled means that coronal and sagittal images were generated by reformatting the original axial images. The model performance for each case was evaluated in three classes: enhancing tumor (ET), tumor core (TC), and WT. On the internal unseen testing dataset split from the 2018 BraTS training dataset, the proposed models achieved mean Sørensen-Dice scores of 0.80, 0.84, and 0.91, respectively, for ET, TC, and WT. On the BraTS validation dataset, the proposed models achieved mean 95% Hausdorff distances of 3.1 mm, 7.0 mm, and 5.0 mm, respectively, for ET, TC, and WT and mean Sørensen-Dice scores of 0.80, 0.83, and 0.91, respectively, for ET, TC, and WT. On the BraTS testing dataset, the proposed models ranked fourth out of 61 teams. The source code is available at https://github.com/GuanLab/Brain_Glioma. This deep learning method consistently segmented subregions of brain glioma with high accuracy, efficiency, reliability, and generalization ability on screening images from a large population, and it can be efficiently implemented in clinical practice to assist neuro-oncologists or radiologists. Supplemental material is available for this article. © RSNA, 2020.
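The "three-plane-assembled" reformatting above, generating coronal and sagittal stacks from the original axial images, amounts to transposing the volume axes. The sketch below assumes a (z, y, x) axial ordering, which is an assumption about the data layout rather than a detail taken from the paper.

```python
import numpy as np

def three_plane_views(vol):
    """Reformat an axial MRI volume into three orthogonal slice stacks.

    vol is assumed to be ordered (z, y, x), i.e. an axial stack; each
    returned array has the slicing axis first, so 2D slices for a 2D
    U-Net can be taken along axis 0 of each view.
    """
    axial = vol                        # (z, y, x): slices along z
    coronal = vol.transpose(1, 0, 2)   # (y, z, x): slices along y
    sagittal = vol.transpose(2, 0, 1)  # (x, z, y): slices along x
    return axial, coronal, sagittal
```

Training a 2D model on all three views lets it see context from every orientation while keeping the memory footprint of 2D convolutions.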

  • Research Article
  • Cited by 38
  • 10.1016/j.rineng.2024.101892
3DUV-NetR+: A 3D hybrid semantic architecture using transformers for brain tumor segmentation with MultiModal MR images
  • Feb 9, 2024
  • Results in Engineering
  • Ilyasse Aboussaleh + 4 more


  • Book Chapter
  • Cited by 14
  • 10.1007/978-3-030-11726-9_20
Multi-scale Masked 3-D U-Net for Brain Tumor Segmentation
  • Jan 1, 2019
  • Yanwu Xu + 5 more

The brain tumor segmentation task aims to classify sub-regions into peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core using multimodal MRI scans. This task is very challenging due to the intrinsic high heterogeneity of tumor appearance and shape. Recently, with the development of deep models and computing resources, deep convolutional neural networks have shown their effectiveness on brain tumor segmentation from 3D MRI scans, obtaining the top performance in the MICCAI BraTS challenge 2017. In this paper, we further boost the performance of brain tumor segmentation by proposing a multi-scale masked 3D U-Net which captures multi-scale information by stacking multi-scale images as inputs and incorporating a 3D Atrous Spatial Pyramid Pooling (ASPP) layer. To filter noisy results for tumor core (TC) and enhancing tumor (ET), we train the TC and ET segmentation networks from the bounding boxes for whole tumor (WT) and TC, respectively. On the BraTS 2018 validation set, our method achieved average Dice scores of 0.8094, 0.9034, 0.8319 for ET, WT and TC, respectively. On the BraTS 2018 test set, our method achieved Dice scores of 0.7690, 0.8711, and 0.7792 for ET, WT and TC, respectively. In particular, our multi-scale masked 3D network achieved very promising results on the enhancing tumor (ET), which is the hardest to segment due to its small scale and irregular shape.
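The ASPP layer mentioned above widens the receptive field by running parallel dilated (atrous) convolutions at several rates. A dilated kernel of size k and dilation d covers a spatial extent of k + (k-1)(d-1) voxels without adding weights; the helper below just evaluates that formula, and the example rates (1, 2, 4) are illustrative assumptions rather than the paper's configuration.

```python
def dilated_kernel_extent(k, d):
    # spatial extent (in voxels, per axis) covered by a k-sized
    # kernel whose taps are spaced d voxels apart
    return k + (k - 1) * (d - 1)
```

With k = 3 and rates (1, 2, 4), the parallel ASPP branches cover extents of 3, 5, and 9 voxels, so their concatenated outputs mix fine local detail with broader context at the same parameter cost per branch.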

