Abstract

Precise segmentation of intestinal wall vessels is vital to preventing colonic perforation. However, intestinal wall vessel images contain interferences such as gastric juice, and vessels are especially difficult to distinguish from mucosal folds, which easily leads to mis-segmentation. In addition, insufficient feature extraction of intricate vessel structures may miss the tiny vessels whose rupture causes perforation. To overcome these challenges, an effective network for intestinal wall vessel segmentation is proposed: a global context attention network (GCA-Net) that employs a multi-scale fusion attention (MFA) module to adaptively integrate local and global context information, improving the distinguishability of mucosal folds and vessels and, more importantly, the ability to capture tiny vessels. A parallel decoder introduces a contour loss function to address blurry and noisy vessel boundaries. Extensive experiments demonstrate the superiority of GCA-Net, which achieves an accuracy of 94.84%, specificity of 97.89%, F1-score of 73.80%, AUC of 96.30%, and mean IoU of 76.46% under fivefold cross-validation, exceeding the comparison methods. The public DRIVE dataset is also used to verify the potential of GCA-Net for retinal vessel segmentation. The proposed network suppresses interferences in intestinal wall vessel images, improves the discernibility of blood vessels and mucosal folds, enhances vessel boundaries, and captures tiny vessels; comprehensive experiments confirm that GCA-Net segments intestinal wall vessels accurately.
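The abstract does not specify how the contour loss is computed. A minimal sketch of one plausible form, assuming a Dice-style loss between contours extracted with a morphological gradient (dilation minus erosion), is given below; the function names, window size, and the morphological-gradient choice are illustrative assumptions, not the paper's method.

```python
import numpy as np

def soft_contour(mask, k=3):
    # Approximate the boundary with a morphological gradient:
    # local max (dilation) minus local min (erosion) over a k x k window.
    # This is an assumed boundary extractor, not taken from the paper.
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")
    h, w = mask.shape
    contour = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            contour[i, j] = win.max() - win.min()
    return contour

def contour_loss(pred, target, k=3, eps=1e-7):
    # Dice-style overlap loss between predicted and target contours:
    # 0 when the contours coincide, approaching 1 when they are disjoint.
    pc = soft_contour(pred, k)
    tc = soft_contour(target, k)
    inter = (pc * tc).sum()
    return 1.0 - (2.0 * inter + eps) / (pc.sum() + tc.sum() + eps)
```

In a full training pipeline this term would be added to a standard region loss (e.g. cross-entropy or Dice) so that boundary pixels contribute extra gradient, which is one common way to sharpen blurry vessel edges.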
