Deep learning plays a pivotal role in retinal blood vessel segmentation for medical diagnosis. Despite their significant efficacy, these techniques face two major challenges. First, they often neglect the severe class imbalance in fundus images, where thin foreground vessels account for only a small fraction of pixels. Second, they are susceptible to poor image quality and blurred vessel edges, resulting in discontinuities or breaks in vascular structures. In response, this paper proposes the Skeleton-guided Multi-scale Dual-coordinate Attention Aggregation (SMDAA) network for retinal vessel segmentation. SMDAA comprises three innovative modules: Dual-coordinate Attention (DCA), Unbalanced Pixel Amplifier (UPA), and Vessel Skeleton Guidance (VSG). DCA, which integrates Multi-scale Coordinate Feature Aggregation (MCFA) and Scale Coordinate Attention Decoding (SCAD), analyzes vessel structures across multiple scales and captures intricate details, thereby significantly enhancing segmentation accuracy. To address class imbalance, we introduce UPA, which dynamically allocates more attention to misclassified pixels, ensuring precise extraction of thin and small blood vessels. Moreover, to preserve the continuity of vessel structures, we draw on vessel anatomy and develop the VSG module to connect fragmented vessel segments. Additionally, a Feature-level Contrast (FCL) loss is introduced to capture subtle differences within the same category, enhancing the fidelity of retinal blood vessel segmentation. Extensive experiments on three public datasets (DRIVE, STARE, and CHASE_DB1) demonstrate superior performance compared with current methods. The code is available at https://github.com/wangwxr/SMDAA_NET.
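The abstract does not detail how UPA weights pixels; a minimal sketch of the general idea it describes, dynamically giving more attention to misclassified pixels, is a focal-style reweighting of a per-pixel loss. The function name `weighted_vessel_loss`, the `gamma` exponent, and the PyTorch framing below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def weighted_vessel_loss(logits, target, gamma=2.0):
    """Per-pixel binary cross-entropy whose weight grows with the
    prediction error, so hard (misclassified) vessel pixels receive
    more attention than easy background pixels."""
    prob = torch.sigmoid(logits)
    # p_t: predicted probability of the true class at each pixel
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    # up-weight pixels the model currently gets wrong (focal-style factor)
    weight = (1.0 - p_t) ** gamma
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weight * bce).mean()

# Toy usage: a 1x1x64x64 prediction map and a sparse ground-truth mask
# mimicking the heavy foreground/background imbalance of thin vessels.
logits = torch.randn(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.9).float()
loss = weighted_vessel_loss(logits, mask)
```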