Abstract

Liver tumor segmentation based on medical imaging plays an increasingly important role in liver tumor research and individualized therapeutic decision-making. However, accurate automatic segmentation of liver tumors remains challenging. We therefore aimed to develop a novel deep neural network to improve the results of automatic liver tumor segmentation. This paper proposes the attention-guided context asymmetric fusion network (AGCAF-Net), which combines attention guidance and context fusion modules on top of a residual neural network for automatic liver tumor segmentation. In the attention-guided context block (AGCB), the feature map is first divided into multiple small blocks and the local correlation between features is computed; the global nonlocal fusion module (GNFM) is then used to capture global dependencies between pixels. Additionally, the context pyramid module (CPM) and the asymmetric semantic fusion module (AFM) are used to extract multiscale features and to resolve the feature mismatch during feature fusion, respectively. Finally, we used the liver tumor segmentation benchmark (LiTS) dataset to verify the effectiveness of the proposed network. Our results show that equipping AGCAF-Net with AFM and CPM improves the accuracy of liver tumor segmentation, with the Dice coefficient increasing from 82.5% to 84.1%. AGCAF-Net also outperformed several state-of-the-art U-Net-based methods, achieving a Dice coefficient of 84.1%, a sensitivity of 91.7%, and an average symmetric surface distance of 3.52. AGCAF-Net thus yields better-matched feature fusion and more accurate liver tumor segmentation, effectively improving segmentation accuracy.
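
To make the described mechanism concrete, the following PyTorch snippet sketches one way that patch-wise local attention (as in the AGCB) followed by a global non-local fusion step (as in the GNFM) could be structured. It is an illustration under our own assumptions; the module names, tensor shapes, patch size, and channel reductions are ours, not the authors' released AGCAF-Net code.

```python
# Illustrative sketch: local attention inside non-overlapping patches,
# followed by a global non-local fusion over the whole feature map.
# All design choices here are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchLocalAttention(nn.Module):
    """Self-attention computed independently inside each p x p patch."""

    def __init__(self, channels: int, patch: int = 8):
        super().__init__()
        self.patch = patch
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumes H and W are divisible by the patch size.
        b, c, h, w = x.shape
        p = self.patch
        q, k, v = self.query(x), self.key(x), self.value(x)

        def to_patches(t):
            # (B, C', H, W) -> (B * num_patches, C', p * p)
            cc = t.shape[1]
            t = F.unfold(t, kernel_size=p, stride=p)            # (B, C'*p*p, N)
            n = t.shape[-1]
            return t.view(b, cc, p * p, n).permute(0, 3, 1, 2).reshape(b * n, cc, p * p)

        qp, kp, vp = to_patches(q), to_patches(k), to_patches(v)
        attn = torch.softmax(qp.transpose(1, 2) @ kp, dim=-1)   # (B*N, p*p, p*p)
        out = vp @ attn.transpose(1, 2)                         # (B*N, C, p*p)

        n = (h // p) * (w // p)
        out = out.reshape(b, n, c, p * p).permute(0, 2, 3, 1).reshape(b, c * p * p, n)
        out = F.fold(out, output_size=(h, w), kernel_size=p, stride=p)
        return out + x                                          # residual connection


class GlobalNonLocalFusion(nn.Module):
    """Non-local (global) attention across all spatial positions."""

    def __init__(self, channels: int):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, 1)
        self.phi = nn.Conv2d(channels, channels // 2, 1)
        self.g = nn.Conv2d(channels, channels // 2, 1)
        self.out = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        theta = self.theta(x).view(b, -1, h * w)                # (B, C/2, HW)
        phi = self.phi(x).view(b, -1, h * w)
        g = self.g(x).view(b, -1, h * w)
        attn = torch.softmax(theta.transpose(1, 2) @ phi, dim=-1)  # (B, HW, HW)
        y = (g @ attn.transpose(1, 2)).view(b, c // 2, h, w)
        return x + self.out(y)                                  # residual connection


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    feat = PatchLocalAttention(64, patch=8)(feat)
    feat = GlobalNonLocalFusion(64)(feat)
    print(feat.shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch the local stage restricts attention to small blocks, keeping the attention matrices cheap, while the global stage relates every pixel to every other pixel, mirroring the local-then-global ordering described in the abstract.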
