Abstract
Vessel segmentation in fundus images is crucial for diagnosing eye diseases, and the rapid development of deep learning has greatly improved segmentation accuracy. However, retinal vessels vary widely in scale, and fundus images contain substantial noise unrelated to the vessels, both of which increase the complexity and difficulty of segmentation. Accounting for scale variation and noise suppression together is therefore essential for accurate and stable segmentation. We propose a retinal vessel segmentation method based on multi-scale feature extraction and decoupled representation. Specifically, we design a multi-scale feature extraction module at the skip connections that uses dilated convolutions to capture multi-scale features and channel attention to emphasize the most informative ones. In addition, to separate useful spatial information from redundant information, we introduce an image reconstruction branch that assists the segmentation task: a disentangled representation method decouples each image into content and style components, and the content component is used for segmentation. Experiments on the DRIVE, STARE, and CHASE_DB1 datasets show that our method outperforms others, achieving the highest accuracy on all three datasets (DRIVE: 0.9690, STARE: 0.9765, and CHASE_DB1: 0.9757).
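To make the skip-connection design concrete, below is a minimal PyTorch sketch of a multi-scale module of the kind the abstract describes: parallel dilated convolutions fused by a 1x1 convolution, followed by squeeze-and-excitation-style channel attention. The class name `MultiScaleSkipModule`, the dilation rates, and the reduction ratio are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class MultiScaleSkipModule(nn.Module):
    """Hypothetical skip-connection block: parallel dilated 3x3 convolutions
    capture multi-scale context; channel attention reweights the fused map."""

    def __init__(self, channels: int, dilations=(1, 2, 4), reduction: int = 8):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = dilation keeps the
        # spatial resolution unchanged across branches.
        self.branches = nn.ModuleList(
            [
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
                for d in dilations
            ]
        )
        # 1x1 convolution fuses the concatenated multi-scale responses.
        self.fuse = nn.Conv2d(len(dilations) * channels, channels, kernel_size=1)
        # Channel attention: global average pooling -> bottleneck -> sigmoid gate.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi_scale)
        return fused * self.attention(fused)  # per-channel reweighting


if __name__ == "__main__":
    skip = MultiScaleSkipModule(channels=64)
    features = torch.randn(1, 64, 128, 128)  # encoder feature map
    print(skip(features).shape)  # torch.Size([1, 64, 128, 128])
```

In a U-Net-style network, a block like this would sit on each skip connection, processing the encoder feature map before it is concatenated with the corresponding decoder feature map.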