Abstract

Glioma segmentation is a critical and challenging task in surgery and treatment planning, and it is also the basis for subsequent evaluation of gliomas. Magnetic resonance imaging is extensively employed in diagnosing brain and nervous system abnormalities. However, brain tumor segmentation remains a challenging task: differentiating brain tumors from normal tissue is difficult, tumor boundaries are often ambiguous, and there is a high degree of variability in tumor shape, location, and extent across patients. It is therefore desirable to devise effective image segmentation architectures. In the past few decades, many algorithms for automatic segmentation of brain tumors have been proposed, and methods based on deep learning have achieved favorable performance for brain tumor segmentation. In this article, we propose a Multi-Scale 3D U-Nets architecture, which uses several U-net blocks to capture long-distance spatial information at different resolutions. We upsample feature maps at different resolutions to extract and utilize sufficient features, and we hypothesize that semantically similar features are easier to learn and process. To reduce the computational cost, we replace some standard 3D convolutions with 3D depthwise separable convolutions. On the BraTS 2015 testing set, we obtained dice scores of 0.85, 0.72, and 0.61 for the whole tumor, tumor core, and enhancing tumor, respectively. Our segmentation performance is competitive with other state-of-the-art methods.
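As a rough illustration of the parameter-saving substitution mentioned above, the following is a minimal PyTorch sketch of a 3D depthwise separable convolution block. The class name, channel counts, and kernel size are illustrative assumptions and do not reflect the authors' actual implementation.

import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    """A 3D depthwise separable convolution: a per-channel (depthwise) 3D
    convolution followed by a 1x1x1 pointwise convolution. Substituting this
    pair for a standard 3D convolution reduces parameters and FLOPs."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # Depthwise step: groups == in_channels applies one filter per channel.
        self.depthwise = nn.Conv3d(
            in_channels, in_channels, kernel_size,
            padding=padding, groups=in_channels, bias=False
        )
        # Pointwise step: 1x1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv3d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

if __name__ == "__main__":
    # Hypothetical multimodal MRI-like input: batch 1, 4 modalities, 32^3 patch.
    x = torch.randn(1, 4, 32, 32, 32)
    block = DepthwiseSeparableConv3d(in_channels=4, out_channels=16)
    print(block(x).shape)  # torch.Size([1, 16, 32, 32, 32])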
