Abstract

Blood vessel segmentation is a crucial step in extracting the morphological characteristics of vessels for the clinical diagnosis of fundus and coronary artery disease. However, traditional convolutional neural networks (CNNs) are confined to learning local vessel features, making it difficult to capture graph structural information and to perceive the global context of vessels. Therefore, we propose a novel graph neural network-guided vision transformer enhanced network (G2ViT) for vessel segmentation. G2ViT orchestrates a Convolutional Neural Network, a Graph Neural Network, and a Vision Transformer to enhance comprehension of the entire graphical structure of blood vessels. To gain deeper insight into the global graph structure and higher-level awareness of the global context, we investigate a graph neural network-guided vision transformer module. This module constructs a graph-structured representation from the high-level features extracted by the CNN and performs graph reasoning on it. To enlarge the receptive field while minimizing the loss of edge information, G2ViT introduces a multi-scale edge feature attention module (MEFA), which leverages dilated convolutions with different dilation rates and the Sobel edge detection algorithm to obtain multi-scale edge information of vessels. To avoid the loss of critical information during upsampling and downsampling, we design a multi-level feature fusion module (MLF2) to fuse complementary information between coarse and fine features. Experiments on retinal vessel datasets (DRIVE, STARE, CHASE_DB1, and HRF) and coronary angiography datasets (DCA1 and CHUAC) indicate that G2ViT excels in robustness, generality, and applicability. Furthermore, it achieves acceptable inference time and computational complexity, offering a new solution for blood vessel segmentation.
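
The abstract does not give implementation details, but the MEFA module it mentions (parallel dilated convolutions at several dilation rates combined with Sobel edge responses) can be illustrated roughly as follows. This is a minimal PyTorch sketch under assumed design choices: the squeeze-and-excitation style channel attention, the residual fusion, and the specific dilation rates are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SobelEdge(nn.Module):
    """Fixed Sobel filters applied depthwise to extract per-channel edge magnitude."""
    def __init__(self, channels):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # Shape (2*C, 1, 3, 3): each input channel gets a Gx and a Gy kernel.
        kernel = torch.stack([gx, gy]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", kernel)
        self.channels = channels

    def forward(self, x):
        edges = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        gx, gy = edges[:, 0::2], edges[:, 1::2]
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)  # gradient magnitude per channel


class MEFA(nn.Module):
    """Hypothetical multi-scale edge feature attention block: dilated-convolution
    branches enlarge the receptive field, a Sobel branch supplies edge cues, and a
    channel attention gate weights the fused result before a residual connection."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.sobel = SobelEdge(channels)
        self.fuse = nn.Conv2d(channels * (len(dilations) + 1), channels, 1)
        # Squeeze-and-excitation style channel attention (an assumed design choice).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = [b(x) for b in self.branches] + [self.sobel(x)]
        fused = self.fuse(torch.cat(feats, dim=1))
        return x + fused * self.attn(fused)


# Example: apply the block to a 64-channel encoder feature map.
if __name__ == "__main__":
    block = MEFA(channels=64)
    out = block(torch.randn(1, 64, 96, 96))
    print(out.shape)  # torch.Size([1, 64, 96, 96])
```

The fixed Sobel kernels are registered as buffers rather than parameters, so the edge branch stays untrained while the dilated branches and the attention gate learn how strongly to use the edge cues.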
