Abstract

High blood sugar levels damage blood vessels and cause an eye condition called diabetic retinopathy. An ophthalmologist can detect this disease by examining variations in the retinal blood vasculature. Manual segmentation of vessels requires highly skilled specialists and cannot be performed quickly for many patients in routine practice. It is therefore of great importance to isolate retinal vessels precisely, quickly, and accurately. The difficulty of distinguishing retinal vessels from the background and the small number of samples in the available databases make this segmentation problem challenging. In this study, we propose a novel network called Block Feature Map Distorted Switchable Normalization U-net with Global Context Informative Convolutional Block Attention Module (BFMD SN U-net with GCI-CBAM). The proposed model improves on traditional fully convolutional segmentation networks in multiple respects: it converges in fewer epochs, adapts more flexibly to different data, is more robust against overfitting, and achieves better feature refinement at different dilation rates to cope with retinal vessels of varying sizes. We evaluate the proposed network on two reference retinal datasets, DRIVE and CHASE DB1, and achieve state-of-the-art performance with 97.00% accuracy and 98.71% AUC on DRIVE and 97.62% accuracy and 99.11% AUC on CHASE DB1. Additionally, the model converges in fewer steps and has fewer parameters than the baseline U-net. In summary, the proposed model surpasses the U-net-based approaches used for retinal vessel segmentation in the literature.
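The abstract builds on the Convolutional Block Attention Module (CBAM), which refines feature maps by applying channel attention followed by spatial attention. The sketch below illustrates that generic two-stage gating in plain NumPy; it is not the paper's GCI-CBAM (the global-context pathway and multiple dilation rates described in the abstract are omitted), and the random MLP weights stand in for learned parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, reduction=4, seed=0):
    """Gate each channel of x (shape C, H, W) using pooled descriptors.

    Average- and max-pooled channel vectors pass through a shared
    two-layer MLP (random weights here; learned in practice).
    """
    c = x.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # illustrative weights
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0) + w2 @ np.maximum(w1 @ mx, 0.0))
    return x * gate[:, None, None]

def spatial_attention(x):
    """Gate each spatial location using channel-pooled maps.

    CBAM uses a 7x7 convolution over the stacked pooled maps;
    a fixed weighted sum is used here for brevity.
    """
    avg = x.mean(axis=0, keepdims=True)
    mx = x.max(axis=0, keepdims=True)
    gate = sigmoid(0.5 * avg + 0.5 * mx)
    return x * gate

def cbam_block(x):
    """Sequential channel-then-spatial refinement, as in CBAM."""
    return spatial_attention(channel_attention(x))

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = cbam_block(feat)
print(out.shape)  # attention preserves the feature-map shape: (8, 16, 16)
```

Because both stages only rescale activations, the refined output keeps the input's shape, which lets such a module be dropped between encoder and decoder stages of a U-net without other architectural changes.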
