Abstract

Magnetic resonance images of brain tumors are routinely used in neuro-oncology clinics for diagnosis, treatment planning, and post-treatment tumor surveillance. Currently, physicians spend considerable time manually delineating different structures of the brain. Spatial and structural variations, as well as intensity inhomogeneity across images, make computer-assisted segmentation very challenging. We propose a new image segmentation framework for tumor delineation that benefits from two state-of-the-art machine learning architectures in computer vision, i.e., Inception modules and the U-Net image segmentation architecture. Furthermore, our framework includes two learning regimes, i.e., learning to segment intra-tumoral structures (necrotic and non-enhancing tumor core, peritumoral edema, and enhancing tumor) or learning to segment glioma sub-regions (whole tumor, tumor core, and enhancing tumor). These learning regimes are incorporated into a newly proposed loss function based on the Dice similarity coefficient (DSC). In our experiments, we quantified the impact of introducing Inception modules into the U-Net architecture, as well as changing the learning objective from segmenting the intra-tumoral structures to segmenting the glioma sub-regions. We found that incorporating Inception modules significantly improved segmentation performance (p < 0.001) for all glioma sub-regions. Moreover, among architectures with Inception modules, models trained with the objective of segmenting the intra-tumoral structures outperformed models trained with the objective of segmenting the glioma sub-regions on the whole tumor (p < 0.001). The improved performance is linked to the multiscale features extracted by the newly introduced Inception modules and the modified DSC-based loss function.
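The loss function is described here only at a high level. As an illustrative sketch, the following PyTorch function implements a generic soft (differentiable) multi-class Dice loss of the kind the abstract describes; the function name soft_dice_loss, the (N, C, H, W) tensor layout, and the smoothing constant eps are assumptions for illustration, not the paper's exact formulation.

    import torch

    def soft_dice_loss(probs: torch.Tensor, targets: torch.Tensor,
                       eps: float = 1e-6) -> torch.Tensor:
        """Soft Dice loss averaged over classes.

        probs:   (N, C, H, W) softmax probabilities, one channel per class.
        targets: (N, C, H, W) one-hot ground-truth masks.
        """
        dims = (0, 2, 3)  # sum over batch and spatial dimensions
        intersection = torch.sum(probs * targets, dims)
        cardinality = torch.sum(probs + targets, dims)
        dsc = (2.0 * intersection + eps) / (cardinality + eps)  # per-class DSC
        return 1.0 - dsc.mean()  # minimize 1 - mean DSC

Depending on the learning regime, the C channels would correspond either to the intra-tumoral structures or to the glioma sub-regions.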

Highlights

  • In recent years, there has been a proliferation of machine learning and especially deep learning techniques in the medical imaging field (Litjens et al., 2017)

  • We explored two different learning objectives: (1) learning to segment glioma sub-regions (whole tumor (WT), tumor core (TC), and enhancing tumor (ET)), and (2) learning to segment intra-tumoral structures

  • Our framework resulted in four different model variations, i.e., (1) a U-Net with the learning objective of intra-tumoral structures, (2) a U-Net with the objective of glioma sub-regions, (3) a U-Net with Inception modules and the objective of intra-tumoral structures, and (4) a U-Net with Inception modules and the objective of glioma sub-regions (the mapping between the two label sets is sketched after this list)
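Under the common BraTS label convention (an assumption here; the paper's exact label encoding may differ), the glioma sub-regions are nested unions of the intra-tumoral structures, which is what makes the two learning objectives directly comparable. A minimal NumPy sketch of that mapping, using the hypothetical helper to_subregions:

    import numpy as np

    # Assumed BraTS-style label encoding (may differ from the paper's):
    # 1 = necrotic and non-enhancing tumor core (NCR/NET),
    # 2 = peritumoral edema (ED), 4 = enhancing tumor (ET).
    def to_subregions(label_map: np.ndarray) -> dict:
        """Derive the nested glioma sub-region masks from intra-tumoral labels."""
        return {
            "WT": np.isin(label_map, [1, 2, 4]),  # whole tumor: all tumor tissue
            "TC": np.isin(label_map, [1, 4]),     # tumor core: NCR/NET + ET
            "ET": label_map == 4,                 # enhancing tumor only
        }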


Introduction

There has been a proliferation of machine learning and especially deep learning techniques in the medical imaging field (Litjens et al., 2017). A convolutional neural network (CNN) is designed to extract features from two-dimensional grid data, e.g., images, through a series of learned filters and non-linear activation functions. The set of features learned through this process can be used to perform various downstream tasks such as image classification, object detection, and semantic or instance segmentation (LeCun et al., 2015). U-Net (Ronneberger et al., 2015), an end-to-end fully convolutional network (FCN) (Long et al., 2015), was proposed for semantic segmentation of various structures in medical images. The U-Net architecture is built from a contracting path, which captures contextual features while downsampling at each layer, and an expanding path, which increases the resolution of the output through upsampling at each layer (Ronneberger et al., 2015). Architectural variations and extensions of U-Net, such as 3D U-Net (Kamnitsas et al., 2017; Sandur et al., 2018), H-DenseUNet (Li et al., 2018), RIC-UNet (Zeng et al., 2019), and Bayesian U-Net (Orlando et al., 2019), have been developed to tackle different segmentation problems in the medical imaging community.
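To make the Inception idea concrete, the sketch below shows an Inception-style block of the kind that could replace the usual pair of 3x3 convolutions at each level of a U-Net's contracting and expanding paths. The branch layout, channel split, and the class name InceptionBlock are illustrative assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class InceptionBlock(nn.Module):
        """Illustrative Inception-style block: parallel branches with different
        receptive fields, concatenated along the channel axis."""

        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            b = out_ch // 4  # split output channels evenly; assumes out_ch % 4 == 0
            self.b1 = nn.Conv2d(in_ch, b, kernel_size=1)                       # 1x1
            self.b3 = nn.Sequential(nn.Conv2d(in_ch, b, kernel_size=1),
                                    nn.Conv2d(b, b, kernel_size=3, padding=1)) # 3x3
            self.b5 = nn.Sequential(nn.Conv2d(in_ch, b, kernel_size=1),
                                    nn.Conv2d(b, b, kernel_size=5, padding=2)) # 5x5
            self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                    nn.Conv2d(in_ch, b, kernel_size=1))        # pool
            self.act = nn.ReLU(inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Each branch sees a different effective scale; concatenation
            # yields a multiscale feature map with out_ch channels.
            return self.act(torch.cat(
                [self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1))

Concatenating branches with different kernel sizes gives each layer access to features at several receptive-field scales, which is the multiscale property credited above for the improved segmentation performance.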

