Abstract

Automatic segmentation of organ tumors and lesions in biomedical imaging is an essential step toward clinical study, treatment planning, and digital biomedical research. However, precise tumor segmentation in medical images remains an open challenge due to noise in the imaging sequence, the similarity of tumor pixel intensities to those of neighboring tissues, and heterogeneity in human anatomy. Most state-of-the-art algorithms are architecturally dependent on deep convolutional networks (DCNs), such as 2D and 3D U-Net, which serve as a foundation for much of biomedical image segmentation. However, 2D DCNs cannot fully leverage inter-slice context information, while 3D DCNs can accumulate inter-slice contextual information over a sizeable receptive field in the organ but consume a considerable amount of GPU memory and incur high execution cost. To achieve a promising solution, we propose a segmentation network called Cascaded Atrous Dual-Attention U-Net. First, our network concatenates features from 3D liver segmentation to 2D tumor segmentation, preserving volumetric information while enlarging resolution and improving segmentation accuracy. Second, we embed a dual attention gate in each skip-connection layer of the 2D segmentation model, which learns to concentrate on discriminative features in order to segment tumors in different organs. Finally, we adopt an atrous encoder, which extracts wider context features from computed tomography than a normal encoder. We evaluated the proposed method on four datasets: the liver tumor segmentation benchmark (LiTS), MSD liver, pancreas tumor segmentation, and kidney tumor segmentation (KiTS), and compared the results with other state-of-the-art segmentation methods.
Our proposed approach performs remarkably better than existing methods, with roughly $4 \sim 6\%$ improvement on each benchmark.
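The attention-gate idea behind the skip connections can be sketched as additive attention: a gating signal from the decoder and the skip-connection features are combined, passed through a ReLU and a sigmoid, and the resulting coefficients rescale the skip features. Below is a toy, dependency-free sketch of that mechanism; the scalar weights `w_x`, `w_g`, and `psi` are illustrative assumptions, not values or an API from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def attention_gate(x, g, w_x=1.0, w_g=1.0, psi=1.0):
    """Toy additive attention gate over a 1-D feature vector.

    x: skip-connection features; g: decoder gating signal (same length).
    alpha_i = sigmoid(psi * relu(w_x * x_i + w_g * g_i)) rescales each
    skip feature, so salient positions pass through and others are damped.
    (Weights here are scalars for illustration only.)
    """
    alphas = [sigmoid(psi * max(0.0, w_x * xi + w_g * gi))
              for xi, gi in zip(x, g)]
    return [xi * a for xi, a in zip(x, alphas)]

# A strong positive gating signal keeps the feature nearly intact;
# a negative one drives the pre-activation to zero, damping the feature.
print(attention_gate([1.0, 1.0], [5.0, -5.0]))
```

In the full network the scalar products become learned 1x1 convolutions over feature maps, but the gating principle is the same.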
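As a rough illustration of why an atrous (dilated) encoder captures wider context than a normal one: with stride-1 convolutions, each layer enlarges the receptive field by (kernel_size - 1) * dilation, so raising the dilation rate widens context without adding parameters. The layer configurations below are hypothetical examples, not the paper's actual architecture.

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolutions.

    Each layer is (kernel_size, dilation); with stride 1 the field
    grows by (kernel_size - 1) * dilation per layer.
    """
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# Three ordinary 3x3 convolutions (dilation 1) vs. an atrous stack
# with dilation rates 1, 2, 4 -- same parameter count, wider context.
plain = receptive_field([(3, 1), (3, 1), (3, 1)])    # 7
atrous = receptive_field([(3, 1), (3, 2), (3, 4)])   # 15

print(plain, atrous)
```

The atrous stack sees more than twice the spatial extent of the plain stack at identical cost, which is the motivation for using it in the encoder.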
