Abstract

Liver segmentation is a critical step in liver cancer diagnosis and surgical planning. The U-Net architecture is one of the most efficient deep networks for medical image segmentation. However, the repeated downsampling operators in U-Net cause a loss of spatial information. To solve this problem, we propose a global context and hybrid attention network, called GCHA-Net, to adaptively capture structural and detailed features. To capture global features, a global attention module (GAM) is designed to model interdependencies along the channel and positional dimensions. To capture local features, a feature aggregation module (FAM) is designed, within which a local attention module (LAM) is proposed to capture spatial information. LAM makes the model focus on local liver regions and suppress irrelevant information. Experimental results on the LiTS2017 dataset show that the dice per case (DPC) and dice global (DG) values for the liver are 96.5% and 96.9%, respectively. Compared with state-of-the-art models, our model achieves superior performance in liver segmentation. We also evaluate our model on the 3Dircadb dataset, where it obtains the highest accuracy among closely related models. These results show that the proposed model can effectively capture global context information and build correlations between different convolutional layers. The code is available at: https://github.com/HuaxiangLiu/GCAU-Net.
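The abstract describes a global attention module that models interdependencies along both the channel and positional dimensions. The sketch below illustrates the general idea of such two-branch attention with NumPy; the function names, the softmax-based weighting, and the summation fusion are illustrative assumptions, not the paper's actual GAM implementation.

```python
import numpy as np

def channel_attention(x):
    """Reweight channels by a softmax over per-channel global descriptors.
    x: feature map of shape (C, H, W). Hypothetical sketch, not the paper's GAM.
    """
    C, H, W = x.shape
    desc = x.reshape(C, -1).mean(axis=1)        # global average pooling -> (C,)
    w = np.exp(desc - desc.max())               # numerically stable softmax
    w = w / w.sum()
    return x * (C * w)[:, None, None]           # rescale so weights average to 1

def position_attention(x):
    """Let every spatial position aggregate context from all other positions.
    x: feature map of shape (C, H, W).
    """
    C, H, W = x.shape
    f = x.reshape(C, H * W)                     # (C, N) with N = H*W
    energy = f.T @ f                            # (N, N) pairwise similarity
    energy -= energy.max(axis=1, keepdims=True)
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)     # row-wise softmax
    out = f @ attn.T                            # context-aggregated features
    return out.reshape(C, H, W)

def global_attention(x):
    """Fuse the channel and positional branches by summation (an assumed choice)."""
    return channel_attention(x) + position_attention(x)
```

Both branches preserve the input shape, so the module can be dropped between convolutional stages without changing the surrounding architecture.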

