Abstract

Automatic segmentation of brain tumors from multimodal MR images plays an important role in treatment decisions and surgical planning. We therefore propose a novel 3D multi-scale Ghost convolutional neural network with an auxiliary MetaFormer decoding path (GMetaNet). By combining the local modeling of CNNs with the long-range representation of Transformers, GMetaNet extracts semantic information efficiently. First, three novel modules are built on the Ghost module: the lightweight Ghost spatial pyramid (GSP) module, the Ghost self-attention (GSA) module, and the dense residual Ghost (DRG) module. Second, the GSP module learns features under different receptive fields at low computational cost to improve multi-scale representation. The proposed GSA module enables the model to capture long-range dependencies. As a local decoder, the DRG module refines information and avoids degradation. Moreover, a global decoder based on MetaFormer is designed to effectively aggregate local and global features. Finally, deep supervision is introduced to ensemble the three outputs and improve convergence. Experiments are conducted on the BraTS datasets to evaluate the proposed model. The Dice scores are 90.1, 84.1, and 82.0 for the whole tumor, tumor core, and enhancing tumor on the BraTS 2018 validation set, and 90.2, 82.5, and 78.4 on the BraTS 2019 validation set. Network parameters and FLOPs are 6.1 M and 69.2 G, respectively. Experimental results show that GMetaNet is comparable to state-of-the-art methods in segmentation performance, while showing greater potential in computational efficiency.
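The Ghost module underlying the GSP, GSA, and DRG modules follows the GhostNet idea: a primary convolution produces a few intrinsic feature maps, and cheap depthwise operations generate the remaining "ghost" maps. The following is a minimal 3D sketch in PyTorch under that assumption; the class name, hyperparameters, and layer choices are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GhostModule3D(nn.Module):
    """Minimal GhostNet-style 3D Ghost convolution sketch (illustrative)."""

    def __init__(self, in_ch, out_ch, ratio=2, kernel=1, cheap_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio        # intrinsic maps from the primary conv
        cheap_ch = out_ch - init_ch      # "ghost" maps from cheap depthwise ops
        self.primary = nn.Sequential(
            nn.Conv3d(in_ch, init_ch, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm3d(init_ch),
            nn.ReLU(inplace=True),
        )
        # Depthwise (groups=init_ch) conv: far cheaper than a full convolution.
        self.cheap = nn.Sequential(
            nn.Conv3d(init_ch, cheap_ch, cheap_kernel,
                      padding=cheap_kernel // 2, groups=init_ch, bias=False),
            nn.BatchNorm3d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        # Concatenate intrinsic and ghost feature maps along the channel axis.
        return torch.cat([y, self.cheap(y)], dim=1)
```

Because most output channels come from the depthwise "cheap" branch, the block keeps parameter counts and FLOPs low, which is consistent with the lightweight budget (6.1 M parameters, 69.2 G FLOPs) reported above.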
