Extracting vessel morphology from fundus images is key to obtaining pathological insight and enabling early diagnosis of retinal disorders. Manual segmentation of retinal vessels demands a high degree of expertise and is time-intensive. Existing deep learning methods for retinal vessel segmentation rely predominantly on U-shaped convolutional neural networks, and although they have made significant progress, they still struggle to delineate faint, low-contrast vessels against noisy backgrounds. To address these challenges, we propose a U-shaped convolutional neural network strengthened with oriented priors, the Receptive Field Aggregating Gabor Enhance Network (RAGE-Net). Building on Gabor wavelets and Gabor convolutional networks, we redesign the conventional U-shaped network by integrating a Gabor Matching Enhance Architecture (GMEA), which comprises two modules. First, a Dual-scale Gabor Enhance Block (DGEB) strengthens vessel continuity and reinforces fine vessels by introducing oriented feature enhancement through Gabor convolution. Second, a Receptive Field Pyramid Module (RPM) replaces an increase in the number of scales in the Gabor filter bank used for vessel alignment, and also serves as feature fusion to improve the network's overall vessel perception. Compared with U-Net, our model has fewer parameters and improves sensitivity, accuracy, and F1 score on the DRIVE dataset by 3.58%, 0.34%, and 2.29%, respectively. It performs strongly on three public datasets, DRIVE, STARE, and CHASE_DB1, with sensitivities of 0.8172, 0.8126, and 0.8540 and accuracies of 0.9708, 0.9725, and 0.9757, respectively.
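The oriented priors referred to above come from Gabor filters: a Gaussian envelope modulating an oriented sinusoidal carrier, evaluated at several orientations to match elongated structures such as vessels. The sketch below builds such a filter bank with NumPy; all parameter values (kernel size, sigma, wavelength, number of orientations) are illustrative assumptions, not the settings used in RAGE-Net.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel (illustrative parameters).

    size  : kernel side length in pixels (odd)
    sigma : std. dev. of the Gaussian envelope
    theta : orientation of the carrier in radians
    lambd : wavelength of the sinusoidal carrier
    gamma : spatial aspect ratio of the envelope
    psi   : phase offset of the carrier
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier is oriented along theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# A small bank of orientations, of the kind used to align filters
# with vessels running in different directions.
bank = [gabor_kernel(15, sigma=3.0, theta=t, lambd=8.0)
        for t in np.linspace(0.0, np.pi, 8, endpoint=False)]
```

Convolving an image with each kernel in the bank and taking the per-pixel maximum response is a common way to highlight vessels regardless of their local orientation; the Gabor convolution in DGEB embeds this orientation selectivity as a learnable layer rather than a fixed preprocessing step.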
Analysis on the three datasets shows that integrating GMEA, which comprises RPM and DGEB, into the U-shaped network gives our model distinct strengths in AUC and sensitivity (Se): it improves the perception of low-contrast and small vessels without reducing overall accuracy. While maintaining superior accuracy (Acc), the model also improves specificity (Sp) and F1 score, indicating balanced gains across multiple evaluation metrics.