Abstract

Accurate retinal vessel segmentation remains challenging. Recently, deep learning based methods have greatly improved performance. However, non-vascular structures often degrade performance, and small, low-contrast vessels are difficult to detect after several down-sampling operations. To address these problems, we design a deep fusion network (DF-Net) comprising multiscale fusion, feature fusion, and classifier fusion modules for multi-source vessel image segmentation. The multiscale fusion module allows the network to detect blood vessels at different scales. The feature fusion module fuses deep features with vessel responses extracted by a Frangi filter to obtain a compact yet domain-invariant feature representation. The classifier fusion module provides the network with additional supervision. DF-Net also predicts the parameters of the Frangi filter, avoiding manual parameter selection. The learned Frangi filter enhances the feature maps of the multiscale network and restores the edge information lost to down-sampling. The proposed end-to-end network is easy to train, and inference takes 41 ms per image on a GPU. The model outperforms state-of-the-art methods, achieving accuracies of 96.14%, 97.04%, and 98.02% on the three publicly available fundus image datasets DRIVE, STARE, and CHASEDB1, respectively. The code is available at https://github.com/y406539259/DF-Net.
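
To make the feature-fusion idea concrete, the following is a minimal, hypothetical PyTorch sketch that concatenates CNN feature maps with a Frangi vesselness response (computed here with skimage's frangi filter) and mixes them with a 1x1 convolution. The module name, channel counts, and the use of a fixed Frangi filter are assumptions for illustration only; DF-Net itself learns the filter's parameters inside the network rather than using fixed ones.

import torch
import torch.nn as nn
import torch.nn.functional as F
from skimage.filters import frangi  # classical vesselness filter

class FrangiFeatureFusion(nn.Module):
    """Hypothetical fusion block (not the authors' implementation):
    concatenate deep features with a resized Frangi vesselness map,
    then mix the channels with a 1x1 convolution."""
    def __init__(self, in_channels):
        super().__init__()
        self.mix = nn.Conv2d(in_channels + 1, in_channels, kernel_size=1)

    def forward(self, feats, vesselness):
        # feats: (B, C, H, W) deep features from the encoder
        # vesselness: (B, 1, h, w) Frangi response of the input image
        v = F.interpolate(vesselness, size=feats.shape[-2:],
                          mode='bilinear', align_corners=False)
        return self.mix(torch.cat([feats, v], dim=1))

# Toy usage on one grayscale fundus image `img` (numpy array in [0, 1]):
#   vessel = torch.from_numpy(frangi(img)).float()[None, None]  # (1, 1, H, W)
#   fused  = FrangiFeatureFusion(64)(cnn_features, vessel)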
