Abstract

To address the insufficient accuracy and mis-segmentation of fine vessels in existing retinal blood vessel segmentation algorithms, a segmentation algorithm incorporating attention mechanisms is proposed. The network, LCT-UNet, is built on LadderNet and fuses the Convolutional Block Attention Module (CBAM) with a Transformer: the weight-sharing residual module of LadderNet is structurally optimised to reduce the parameter count; CBAM is added after the residual module to adaptively reweight channel and spatial features and strengthen feature representation; and a Transformer with global self-attention is introduced at the bottom of the network to overcome U-Net's inability to model long-range relationships and spatial dependencies, progressively refining features so that the segmentation results retain finer boundary and detail information. The model was validated on the public DRIVE and CHASEDB1 datasets. Experimental results show that LCT-UNet achieves an accuracy (ACC) of 95.81% and 96.88%, sensitivity (SE) of 79.74% and 81.12%, specificity (SP) of 98.15% and 98.83%, AUC of 98.22% and 98.99%, and F1 scores of 82.88% and 85.14% on the two datasets, respectively, and its overall segmentation performance is a clear improvement over algorithms such as U-Net, R2U-Net, and LadderNet.
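
To illustrate the attention stage summarised above, the following is a minimal PyTorch sketch of a standard CBAM block as it might be attached after a feature map produced by a residual module. The layer sizes, reduction ratio, and usage shown here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: squeeze spatial dims with avg/max pooling,
    pass both through a shared MLP, and gate the channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))             # (B, C) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))              # (B, C) from max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    """Spatial attention: pool across channels, then a 7x7 conv
    produces a per-pixel gate."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)              # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)             # (B, 1, H, W)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """CBAM applies channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial_att(self.channel_att(x))

# Illustrative usage: refine a feature map coming out of a residual block.
feats = torch.randn(2, 64, 48, 48)   # (batch, channels, H, W), hypothetical sizes
refined = CBAM(64)(feats)            # same shape, attention-reweighted
print(refined.shape)                 # torch.Size([2, 64, 48, 48])

Because both attention stages only rescale the input feature map, the block can be inserted after a residual module without changing tensor shapes elsewhere in the network.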
