Abstract

Automated liver tumor segmentation from computed tomography (CT) images is a necessary prerequisite for interventions on hepatic abnormalities and for surgery planning. However, accurate liver tumor segmentation remains challenging due to the large variability of tumor sizes and inhomogeneous texture. Recent advances in medical image segmentation based on fully convolutional networks (FCNs) draw on the success of learning discriminative pyramid features. In this paper, we propose a decoupled pyramid correlation network (DPC-Net) that exploits attention mechanisms to fully leverage both low- and high-level features embedded in an FCN to segment liver tumors. We first design a powerful pyramid feature encoder (PFE) to extract multilevel features from input images. We then decouple the characteristics of these features along the spatial dimensions (i.e., height, width, depth) and the semantic dimension (i.e., channel). On top of that, we present two types of attention modules, spatial correlation (SpaCor) and semantic correlation (SemCor) modules, to recursively measure the correlation of multilevel features. The SemCor module selectively emphasizes global semantic information in low-level features with the guidance of high-level ones; the SpaCor module adaptively enhances spatial details in high-level features with the guidance of low-level ones. We evaluate DPC-Net on the MICCAI 2017 Liver Tumor Segmentation (LiTS) challenge dataset, using the Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD) as evaluation metrics. The proposed method obtains a DSC of 76.4% and an ASSD of 0.838 mm for liver tumor segmentation, outperforming state-of-the-art methods. It also achieves a competitive result for liver segmentation, with a DSC of 96.0% and an ASSD of 1.636 mm. The experimental results show the promising performance of DPC-Net for liver and tumor segmentation from CT images.
Furthermore, the proposed SemCor and SpaCor modules effectively model multilevel correlation along both the semantic and spatial dimensions. These attention modules are lightweight and can easily be extended to other multilevel methods in an end-to-end manner.
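To make the two correlation directions concrete, the sketch below illustrates the general idea of gating one feature level with a descriptor pooled from the other: a channel-wise (semantic) gate derived from high-level features applied to low-level features, and a pixel-wise (spatial) gate derived from low-level features applied to high-level ones. This is a minimal NumPy illustration of the attention pattern, not the paper's actual DPC-Net implementation; all function and variable names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    """Numerically plain logistic gate mapping values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def semcor_attention(low_feat, high_feat):
    """Semantic-correlation sketch: pool the high-level features into a
    per-channel descriptor and use it to reweight the channels of the
    low-level features (illustrative, not the paper's exact module)."""
    # low_feat, high_feat: (C, H, W) arrays with matching channel count C
    desc = high_feat.mean(axis=(1, 2))       # (C,) global channel descriptor
    gate = sigmoid(desc)                     # channel attention weights
    return low_feat * gate[:, None, None]    # channel-wise reweighting

def spacor_attention(high_feat, low_feat):
    """Spatial-correlation sketch: collapse the low-level features across
    channels into a spatial map and gate the high-level features at each
    location, emphasizing fine spatial detail."""
    spatial = low_feat.mean(axis=0)          # (H, W) spatial descriptor
    gate = sigmoid(spatial)                  # per-pixel attention weights
    return high_feat * gate[None, :, :]
```

In a pyramid decoder these gates would be applied recursively at each resolution level before the features are fused, which keeps the added cost small since each gate is a single pooled descriptor and an element-wise product.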
