Abstract

Automatic retinal vessel segmentation plays a crucial role in the diagnosis and assessment of various ophthalmologic diseases. Most current retinal vessel segmentation algorithms are based on an encoder-decoder structure. However, these U-Net analogs suffer from the loss of both spatial and semantic information caused by the continuous up-sampling operations in the decoder. In this paper, we rethink this problem and build a novel deep neural network for retinal vessel segmentation, called FSE-Net. Specifically, to address the loss of feature information and improve segmentation performance, we eliminate the decoder structure entirely and introduce a multi-head feature fusion block (MFF) as a substitute for the continuous up-sampling operations. In addition, the encoder stage of FSE-Net incorporates a residual feature separable block (RFSB) to further refine and distill features, strengthening feature extraction. We also employ a residual atrous spatial feature aggregate module (RASF) to expand the network's receptive field by aggregating multi-scale feature information. We conducted experiments on five widely used retinal vessel segmentation databases: DRIVE, CHASEDB1, STARE, IOSTAR, and LES-AV. The results demonstrate that the proposed FSE-Net outperforms state-of-the-art approaches in segmentation performance, and show that superior segmentation can be achieved without the traditional U-Net-style encoder-decoder structure.
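
As a purely illustrative aside, a residual atrous aggregation module of the general kind the RASF name suggests can be sketched in PyTorch as below: parallel dilated convolutions capture multi-scale context, their outputs are fused by a 1x1 convolution, and the result is added back to the input through a residual connection. The class name, dilation rates, and channel sizes here are our assumptions for the sketch, not the paper's actual RASF implementation.

```python
import torch
import torch.nn as nn

class ResidualAtrousAggregate(nn.Module):
    """Illustrative sketch (not the paper's RASF): parallel dilated
    convolutions whose outputs are concatenated, fused by a 1x1
    convolution, and added back to the input as a residual."""

    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        # One 3x3 dilated-convolution branch per (assumed) dilation rate.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1 convolution to fuse the concatenated multi-scale features.
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(multi_scale)  # residual aggregation

if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    out = ResidualAtrousAggregate(32)(x)
    print(out.shape)  # torch.Size([1, 32, 64, 64]); spatial size preserved
```

Because each dilated branch uses padding equal to its dilation rate, all branches preserve the spatial resolution, which is what makes the residual addition well-defined.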
