Abstract
Vessel segmentation contributes to the precise diagnosis of fundus diseases. However, owing to the presence of microvessels and class imbalance, accurately segmenting vessels from fundus images remains a challenging task. To address this issue, a coarse-to-fine framework, namely the cascade self-attention U-shaped network (CSAUNet), is proposed in this paper. First, a self-attention U-shaped network (SAUNet) is pre-trained to roughly locate the fundus vessels. The pre-trained SAUNet is then cascaded with a residual self-attention UNet (Res-SAUNet), and its rough segmentation results serve as spatial attention maps that force Res-SAUNet to focus on significant regions. In addition, self-attention modules (SAMs) are introduced in both SAUNet and Res-SAUNet to effectively model long-range dependencies across fundus vessels. Experimental results show that the accuracy, sensitivity, specificity, and Dice similarity coefficient (DSC) of CSAUNet on three public databases, namely DRIVE, STARE, and synthetic images (SI), reach 96.76%/83.4%/98.1%/82.07%, 97.28%/83.04%/98.62%/84%, and 99.15%/95.39%/99.62%/96.12%, respectively. Furthermore, the coarse-to-fine framework improves the DSC and sensitivity on DRIVE, STARE, and SI by 1.24%/2.37%, 1.69%/1.04%, and 0.8%/0.72%, respectively, and the incorporation of SAMs further improves the DSC on DRIVE, STARE, and SI by 0.37%, 0.7%, and 0.49%, respectively, indicating that CSAUNet improves segmentation performance for fundus vessels.
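To make the cascade idea concrete, the following is a minimal PyTorch-style sketch of the coarse-to-fine wiring described in the abstract. It assumes the coarse output is a single-channel vessel logit map whose sigmoid is reused as a spatial attention map on the input to the second network; the names `coarse_net` and `fine_net` are hypothetical stand-ins for the pre-trained SAUNet and Res-SAUNet, and the exact architectures (including the SAMs) are not specified here.

```python
import torch
import torch.nn as nn


class CascadeSketch(nn.Module):
    """Illustrative coarse-to-fine cascade (not the authors' exact model).

    coarse_net: stand-in for the pre-trained SAUNet (weights typically frozen).
    fine_net:   stand-in for Res-SAUNet, the refinement network.
    Both are assumed to map an image to a single-channel vessel logit map.
    """

    def __init__(self, coarse_net: nn.Module, fine_net: nn.Module):
        super().__init__()
        self.coarse_net = coarse_net
        self.fine_net = fine_net

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stage 1: rough vessel localization, squashed to probabilities in [0, 1].
        coarse_prob = torch.sigmoid(self.coarse_net(x))
        # Treat the rough segmentation as a spatial attention map: re-weight the
        # input so the fine stage focuses on regions the coarse stage flagged.
        attended = x * coarse_prob
        # Stage 2: refined segmentation on the attended input.
        return self.fine_net(attended)
```

Under this reading, the coarse stage narrows the search space and the fine stage recovers thin microvessels within the attended regions, which is consistent with the reported DSC and sensitivity gains from the cascade.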