Abstract

Pathological changes of the retina are closely related to many human diseases, such as hypertension and diabetes. In clinical medicine, the pathological condition of the retinal blood vessels is commonly used to diagnose a variety of related diseases, and retinal blood vessel segmentation is the basis of such diagnosis, playing an important role in the screening and detection of these diseases. However, current retinal vessel segmentation methods suffer from low accuracy and poor vessel connectivity. In this paper, we propose a new segmentation algorithm, D-Mnet (Deformable convolutional M-shaped Network), which is based on multi-scale attention with a residual mechanism and is combined with an improved PCNN (Pulse-Coupled Neural Network) model. The network is built on an encoder-decoder structure and introduces a deformable convolutional model and a multi-scale attention module with a residual mechanism to improve capillary segmentation accuracy and the connectivity of the overall vessel segmentation. At the same time, the network incorporates an improved PCNN model in order to combine the advantages of supervised and unsupervised learning and further improve retinal blood vessel segmentation performance. We use fundus images from four public databases, DRIVE, STARE, CHASE_DB1 and HRF, to evaluate our algorithm against existing methods. Experimental results show that the segmentation accuracy on the four databases reaches 96.83%, 97.32%, 97.14% and 96.68%, respectively, and that the proposed algorithm outperforms most existing algorithms.
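The abstract does not include code, but the two building blocks it names can be sketched to clarify how they might fit together: a deformable convolution whose sampling offsets are learned, and a multi-scale attention module with a residual connection. The sketch below is a minimal, hypothetical PyTorch illustration; the class names, channel sizes, dilation rates, and the use of torchvision.ops.DeformConv2d are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of the blocks described in the abstract; layer names and
# hyper-parameters are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBlock(nn.Module):
    """3x3 deformable convolution whose sampling offsets are predicted
    by an ordinary convolution applied to the same input."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position -> 2 * 3 * 3 = 18 channels
        self.offset = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.deform(x, self.offset(x)))


class MultiScaleResidualAttention(nn.Module):
    """Gathers features at several dilation rates, turns them into a spatial
    attention map, and applies it with a residual (skip) connection."""
    def __init__(self, ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(ch * len(dilations), ch, kernel_size=1)
        self.gate = nn.Sequential(nn.Conv2d(ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        attn = self.gate(self.fuse(multi))   # spatial attention map in [0, 1]
        return x + attn * x                  # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)           # toy encoder feature map
    y = MultiScaleResidualAttention(32)(DeformableBlock(32, 32)(x))
    print(y.shape)                            # torch.Size([1, 32, 64, 64])
```

In an encoder-decoder network of the kind the abstract describes, such blocks would typically replace standard convolutions at one or more stages, with the attention module refining skip or decoder features; the exact placement in D-Mnet is specified in the full paper, not here.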
