Abstract

Retinal vessel segmentation is a rapid method for diagnosing ocular diseases. By applying deep-learning-based techniques to retinal images, more structural information about retinal vessels can be extracted to accurately assess the extent and classification of ocular diseases. However, current segmentation networks typically consist of a single network, making them vulnerable to noise, degraded image quality, and other interfering factors, which leads to erroneous segmentation results. In addition, the traditional skip-connection mechanism introduces noise from the encoder features into the decoder, which degrades the quality of the final segmentation. To address these issues, a three-stage fundus vessel segmentation model called EWSNet is proposed. EWSNet uses two different models to extract and reconstruct coarse and fine blood vessels, respectively. The reconstructed results are fed into a refinement network that rebuilds the edge regions of the retinal vessels, yielding higher segmentation performance. Within the EWSNet framework, a wavelet-transform-based sampling module effectively suppresses high-frequency noise in the features while using the low-frequency features to reconstruct vascular information. In addition, a new edge loss function (E-BCE Loss) is designed to encourage more precise predictions at segmentation edges. Experimental results on CHASE_DB1, HRF, STARE, and a newly collected ultra-wide-angle fundus dataset (UWF) demonstrate that EWSNet achieves more robust segmentation performance in microvascular regions than current mainstream models. The code is available at: https://github.com/xuecheng990531/EWSNet.
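
The abstract describes the wavelet-transform-based sampling module only at a high level. The snippet below is a minimal PyTorch sketch of the idea, assuming a single-level Haar transform whose low-frequency (LL) subband replaces ordinary strided downsampling; the class name WaveletDownsample and all implementation details are illustrative assumptions, not the EWSNet code (see the repository linked above for the authors' implementation).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WaveletDownsample(nn.Module):
        # Keeps only the low-frequency (LL) Haar subband, halving spatial size
        # and suppressing high-frequency noise compared with max pooling.
        def __init__(self, channels):
            super().__init__()
            # 2x2 Haar low-pass (averaging) filter, applied depthwise per channel.
            self.register_buffer("ll_filter", torch.full((channels, 1, 2, 2), 0.5))
            self.channels = channels

        def forward(self, x):
            # Depthwise stride-2 convolution == LL subband of a single-level Haar DWT.
            return F.conv2d(x, self.ll_filter, stride=2, groups=self.channels)

    # Example: a 64-channel feature map of size 128x128 is reduced to 64x64.
    feat = torch.randn(1, 64, 128, 128)
    print(WaveletDownsample(64)(feat).shape)  # torch.Size([1, 64, 64, 64])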
