Abstract

Retinal fundus vessels contain rich geometric features, including both thick and thin vessels, which are particularly important for accurate clinical diagnosis of cardiovascular diseases. Deep convolutional neural networks (DCNNs), especially U-Net and its variants, have shown remarkable performance on fundus vessel segmentation owing to their ability to express contextual features effectively. However, the repeated convolution and pooling operations in existing DCNN-based methods cause a loss of information about fine details and small objects, which reduces the accuracy of thin-vessel detection and degrades the final segmentation performance. To address this problem, a multi-task symmetric network, called GDF-Net, is proposed for accurate retinal fundus vessel segmentation; it is composed of two symmetric segmentation branches (a global segmentation network branch and a detail enhancement network branch) and a fusion network branch. The two symmetric segmentation branches extract global contextual features and detail features, respectively, mitigating the information loss. To combine the strengths of both branches, a fusion network integrates their features and improves the segmentation accuracy for retinal fundus vessels. Experiments demonstrate that GDF-Net achieves competitive performance on retinal vessel segmentation.
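The abstract only outlines the three-branch design, so the following is a minimal sketch of such a two-branch-plus-fusion architecture, assuming each branch is a small U-Net-style encoder-decoder and that the fusion branch concatenates the two branch feature maps before a final 1x1 convolution. All module names, layer sizes, and the fusion rule are illustrative assumptions, not the authors' GDF-Net implementation.

```python
# Hypothetical sketch of a two-branch + fusion segmentation network.
# Not the authors' GDF-Net; layer sizes and the fusion rule are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MiniUNet(nn.Module):
    """A small U-Net-style encoder-decoder used as one segmentation branch."""

    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # per-pixel vessel logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1), d1  # branch logits and last feature map


class TwoBranchFusionNet(nn.Module):
    """Global branch + detail branch, fused by concatenation and a 1x1 conv."""

    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.global_branch = MiniUNet(in_ch, base)  # coarse vessel context
        self.detail_branch = MiniUNet(in_ch, base)  # thin-vessel detail
        self.fusion = nn.Conv2d(2 * base, 1, 1)     # combines both feature maps

    def forward(self, x):
        g_logit, g_feat = self.global_branch(x)
        d_logit, d_feat = self.detail_branch(x)
        fused = self.fusion(torch.cat([g_feat, d_feat], dim=1))
        return fused, g_logit, d_logit  # multi-task outputs for joint supervision


if __name__ == "__main__":
    model = TwoBranchFusionNet()
    x = torch.randn(1, 3, 64, 64)  # dummy fundus image patch
    fused, g, d = model(x)
    print(fused.shape, g.shape, d.shape)  # each: (1, 1, 64, 64)
```

In a multi-task setup of this kind, each branch output can receive its own supervision (e.g. a segmentation loss per branch plus one on the fused output), which is one common way to keep the detail branch sensitive to thin vessels while the global branch handles overall vessel topology.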
