Abstract

Medical studies have shown that the state of the human retinal vessels reflects the physiological links between systemic diseases and ophthalmic diseases such as age-related macular degeneration, glaucoma, atherosclerosis, cataracts, and diabetic retinopathy, and that abnormal changes in the vessels often serve as a diagnostic indicator of disease severity. In this paper, we design and implement CSP_UNet, a deep learning-based algorithm for automatic retinal vessel segmentation. It adopts a U-shaped encoder-decoder structure and combines a cross-stage partial connection mechanism, an attention mechanism, and multi-scale fusion, which together yield good segmentation results on datasets of limited size. Experimental results show that, compared with several existing classical algorithms, the proposed algorithm achieves the highest vessel intersection-over-union (IoU) on a dataset composed of four retinal fundus image datasets, reaching 0.6674. Then, building on CSP_UNet and introducing hard parameter sharing from multi-task learning, we propose MTNet, an algorithm that jointly performs vessel segmentation and diabetic retinopathy diagnosis on retinal images. Experiments show that MTNet outperforms its single-task counterparts, with a 0.4% higher vessel segmentation IoU and a 5.2% higher diabetic retinopathy classification accuracy.
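To make the two mechanisms named above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a cross-stage partial block in the spirit of CSPNet (half the channels bypass the convolution stack and are re-joined by concatenation), and a shared encoder feeding two task heads to illustrate hard parameter sharing between vessel segmentation and diabetic retinopathy classification. All module names, channel sizes, the truncated one-level encoder, and the five-grade classifier output are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class CSPBlock(nn.Module):
    """Cross-stage partial block: half the channels bypass the conv stack
    and are re-joined by concatenation, in the spirit of CSPNet."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.convs = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)  # re-mix both branches

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)  # cross-stage split
        return self.fuse(torch.cat([a, self.convs(b)], dim=1))

class MTNetSketch(nn.Module):
    """Hard parameter sharing: one encoder trunk feeds both task heads."""
    def __init__(self, in_ch=3, base=32, num_classes=5):  # num_classes assumed
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.enc1 = CSPBlock(base)
        self.down = nn.MaxPool2d(2)
        self.enc2 = CSPBlock(base)  # shared trunk, truncated to one level here
        # Head 1: per-pixel vessel mask, upsampled back to input resolution.
        self.seg_head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(base, 1, 1),
        )
        # Head 2: image-level diabetic retinopathy grade.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base, num_classes)
        )

    def forward(self, x):
        f = self.enc2(self.down(self.enc1(self.stem(x))))
        return self.seg_head(f), self.cls_head(f)

model = MTNetSketch()
mask_logits, grade_logits = model(torch.randn(1, 3, 256, 256))
print(mask_logits.shape, grade_logits.shape)  # (1, 1, 256, 256), (1, 5)
```

Because both heads read the same shared features, gradients from the classification loss also shape the segmentation trunk and vice versa, which is the mechanism by which multi-task training can lift both metrics, as the abstract reports.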
