ABSTRACT

Morphological changes in retinal vessels are early indicators of cardiovascular and various fundus diseases. However, accurately segmenting thin blood vessels remains a challenge due to the complexity of the vascular structure and the irregularity of pathological features. This paper proposes a dual chain fusion U-Net (DCFU-Net) for the precise segmentation of retinal vessels. The network consists of a multi-level segmentation network and a fusion network. The multi-level segmentation network adopts a dual chain architecture to generate segmentation results for thick and thin vessels simultaneously. The fusion network combines the segmented thin and thick vessels with the original image to produce the final, accurate segmentation. Notably, traditional convolution structures in DCFU-Net are replaced by dynamic snake convolutions (DS-Conv), which adaptively focus on slender and tortuous local features and thereby capture vascular structures accurately. The shared-weight residual block that integrates DS-Conv with residual structures, called the DS-Res block, serves as the backbone of DCFU-Net, enhancing feature extraction while significantly reducing computational resource consumption. Additionally, this paper rethinks the effective components of the Transformer architecture and identifies the inverted residual mobile block (IRMB) as a key element. By extending the DS-Conv-based IRMB into effective attention-based (EAB) blocks, the network mitigates the loss of semantic information and thereby addresses inherent limitations. DCFU-Net is evaluated on three publicly available datasets: DRIVE, STARE, and CHASE_DB1. Qualitative and quantitative analyses demonstrate that its segmentation results outperform state-of-the-art methods.
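The abstract does not specify implementation details, but the DS-Res block it describes (DS-Conv layers combined with a residual structure, with weights shared across the two chains) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: `DSConvPlaceholder` stands in for the actual dynamic snake convolution (which learns per-position sampling offsets along the kernel axis), and the block layout and class names are hypothetical, not taken from the paper.

```python
# Minimal sketch of an assumed DS-Res block; names are illustrative only.
import torch
import torch.nn as nn


class DSConvPlaceholder(nn.Module):
    """Stand-in for dynamic snake convolution (DS-Conv).

    A faithful DS-Conv would predict offsets and sample the input along a
    deformed 'snake' path to follow thin, tortuous vessels; this placeholder
    keeps only the interface, using a plain convolution.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):
        return self.conv(x)


class DSResBlock(nn.Module):
    """Residual block built from DS-Conv layers (assumed arrangement)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            DSConvPlaceholder(in_ch, out_ch),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            DSConvPlaceholder(out_ch, out_ch),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches channel counts.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))


# One reading of "shared weight": the thick- and thin-vessel chains reuse the
# same block instance, so both paths train the same parameters.
shared = DSResBlock(32, 64)
thick_features = shared(torch.randn(1, 32, 64, 64))
thin_features = shared(torch.randn(1, 32, 64, 64))
```

Reusing a single block instance across both chains, as sketched above, is one plausible way such weight sharing would reduce parameter count and computational cost relative to two independent encoders.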