Abstract
Accurate segmentation of retinal vessels is a fundamental step in diabetic retinopathy (DR) detection. Most methods based on deep convolutional neural networks (DCNNs) have small receptive fields and are therefore unable to capture the global context of larger regions, making pathological structures difficult to identify; the segmented retinal vessels consequently contain more noise and yield low classification accuracy. Therefore, in this paper, we propose a DCNN structure named D-Net. In the encoding phase, we reduce the loss of feature information by reducing the downsampling factor, which lowers the difficulty of segmenting tiny, thin vessels. We use combined dilated convolutions to effectively enlarge the receptive field of the network and alleviate the "gridding problem" that exists in standard dilated convolution. In the proposed multi-scale information fusion module (MSIF), parallel convolution layers with different dilation rates are used, so that the model obtains denser feature information and better captures retinal vessels of different sizes. In the decoding module, skip connections propagate context information to the higher-resolution layers, so that low-level information does not have to traverse the entire network structure. Finally, our method is verified on the DRIVE, STARE, and CHASE datasets. The experimental results show that our network structure outperforms state-of-the-art methods such as $N^4$-fields, U-Net, and DRIU in terms of accuracy, sensitivity, specificity, and $AUC_{ROC}$. In particular, D-Net outperforms U-Net by 1.04%, 1.23%, and 2.79% on the DRIVE, STARE, and CHASE datasets, respectively.
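To make the MSIF idea concrete, the following is a minimal PyTorch sketch of a multi-scale fusion block built from parallel dilated convolutions. It is an illustration under stated assumptions, not the paper's exact configuration: the channel counts, the dilation rates `(1, 2, 4, 8)`, and concatenation-plus-1x1 fusion are all assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class MSIFBlock(nn.Module):
    """Illustrative multi-scale fusion block: parallel dilated 3x3 branches.

    Assumption: D-Net's actual MSIF layout, rates, and fusion strategy may
    differ; this sketch only demonstrates the general technique.
    """
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=rate keeps spatial size.
        # Mixing several rates lets small and large vessels both fall inside
        # some branch's receptive field.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

# Example: a 3-channel fundus patch in, 64 fused feature maps out.
x = torch.randn(1, 3, 64, 64)
y = MSIFBlock(3, 64)(x)
print(y.shape)  # torch.Size([1, 64, 64, 64])
```

Because every branch sees the same input and the rates differ, the fused output is denser than a single dilated convolution of one fixed rate, which is the intuition behind using parallel dilation to capture vessels of varying calibre.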