Abstract

Retinal vessel segmentation is an indispensable part of the automatic detection of retinopathy from fundus images, yet it faces several challenges, such as heavy noise, low contrast between blood vessels and the background, and an uneven distribution of thick and thin vessels. Deep learning-based methods, represented by U-Net, perform very well on retinal vessel segmentation. As the attention mechanism has made breakthroughs in many computer vision tasks, it has attracted increasing interest from researchers. This paper proposes a U-Net based on a triple attention mechanism, 3AU-Net, to address these problems. We follow U-Net's fully convolutional, skip-connection framework and integrate spatial, channel, and context attention mechanisms. Spatial attention lets the segmentation network locate the vessel regions that need attention, thereby suppressing noise. Channel attention makes the feature representation more diverse and highlights the feature channels carrying key information. Context attention integrates contextual information so that the network focuses on key pixels. Experimental results indicate that 3AU-Net substantially improves retinal vessel segmentation and surpasses other deep learning-based methods on many metrics on the DRIVE and STARE fundus image data sets. On the DRIVE data set, 3AU-Net achieves strong performance across multiple evaluation metrics, with an accuracy of 0.9592, an AUC of 0.9770, and a sensitivity of 0.8537.
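
The abstract does not give the exact attention formulations, so the following is only a minimal PyTorch sketch of one plausible triple-attention block, assuming standard designs (squeeze-and-excitation for channel attention, a CBAM-style mask for spatial attention, and a non-local block for context attention). The class and parameter names are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch of a triple-attention block; the paper's exact
# design may differ (names and formulations here are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style: reweight feature channels."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> (b, c)
        return x * w.view(b, c, 1, 1)          # per-channel reweighting

class SpatialAttention(nn.Module):
    """CBAM-style: a per-pixel mask that highlights vessel regions."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)       # (b, 1, h, w)
        mx, _ = x.max(dim=1, keepdim=True)      # (b, 1, h, w)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

class ContextAttention(nn.Module):
    """Non-local style: each pixel aggregates information from all others."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c/8)
        k = self.key(x).flatten(2)                     # (b, c/8, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        attn = F.softmax(q @ k, dim=-1)                # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                    # residual connection

class TripleAttention(nn.Module):
    """Applies channel, spatial, and context attention in sequence."""
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()
        self.context = ContextAttention(channels)

    def forward(self, x):
        return self.context(self.spatial(self.channel(x)))

if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)          # a U-Net encoder feature map
    print(TripleAttention(64)(x).shape)     # torch.Size([1, 64, 48, 48])
```

A block like this would typically be inserted on the skip connections or at the decoder stages of the U-Net, so the fused features are reweighted before concatenation; where exactly 3AU-Net places it is not stated in the abstract.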
