Abstract

Multi-class vessel segmentation of retinal images is the basis for quantitative arteriovenous analysis and plays an important role in the diagnosis and treatment of cerebrovascular diseases. Because retinal vessels are finely detailed and heavily intertwined, traditional feature learning networks that operate on a single image resolution are prone to arteriovenous confusion and discontinuities along vessel edges. To address this, we develop a joint learning paradigm over multiple image resolutions, which overcomes the feature modeling limitations of methods that rely on a single resolution. Specifically, we design a cross-scale feature fusion network with a dual-branch structure that integrates global and local perspectives. This design extracts retinal image features at multiple resolutions and compensates for the vascular features that single-resolution models miss. The framework not only corrects intra-segment misclassification but also improves vessel continuity by recovering edge details. Furthermore, performing cross-scale fusion at multiple stages of the network eases optimization and strengthens the collaborative learning of the two branches. Meanwhile, we adopt a generative adversarial structure as the backbone to supervise and constrain the fused features. Finally, extensive experiments on three publicly available datasets, DRIVE-AV, LES-AV, and HRF-AV, show that the proposed scheme significantly outperforms current state-of-the-art methods. The source code is available at https://github.com/Tang9867/Multi-Resolution-Learning.
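
The abstract describes the dual-branch, cross-scale design only at a high level. The following is a minimal, hypothetical PyTorch sketch of how a two-resolution, dual-branch fusion block might look; all class and parameter names (e.g. CrossScaleFusion) are our own illustrative choices, not taken from the released code, and the sketch omits the multi-stage fusion and the adversarial supervision described above.

```python
# Hypothetical sketch of a dual-branch cross-scale fusion block (names are
# illustrative, not the authors' implementation). One branch processes the
# full-resolution input (local detail); the other processes a downsampled
# view (global context). Their features are fused at the local scale.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class CrossScaleFusion(nn.Module):
    """Fuse local (full-resolution) and global (downsampled) features."""

    def __init__(self, channels=32):
        super().__init__()
        self.local_branch = ConvBlock(3, channels)
        self.global_branch = ConvBlock(3, channels)
        self.fuse = ConvBlock(2 * channels, channels)
        # Example 3-class head: background / artery / vein.
        self.head = nn.Conv2d(channels, 3, kernel_size=1)

    def forward(self, x):
        local_feat = self.local_branch(x)                  # full resolution
        x_low = F.interpolate(x, scale_factor=0.5,
                              mode="bilinear", align_corners=False)
        global_feat = self.global_branch(x_low)            # half resolution
        global_up = F.interpolate(global_feat, size=local_feat.shape[-2:],
                                  mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([local_feat, global_up], dim=1))
        return self.head(fused)


if __name__ == "__main__":
    model = CrossScaleFusion(channels=32)
    logits = model(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 3, 256, 256])
```

In the paper's full pipeline, blocks like this would be stacked so that fusion happens at several stages, and the segmentation output would additionally be constrained by an adversarial discriminator.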
