Abstract

Accurate segmentation of retinal fundus vessel images is vital to clinical diagnosis. Due to the intricate vascular morphology, high noise, and low contrast of fundus vessel images, retinal fundus vessel segmentation remains a challenging task, especially for thin vessels. In recent years, owing to its strong contextual feature extraction ability, deep learning has shown remarkable performance in the automatic segmentation of retinal fundus vessels. However, it still exhibits certain limitations, such as information loss on small objects or fine details and inadequate treatment of local features. To address these challenges, we present a new multi-scale global attention network (MGA-Net). To achieve effective feature representation, a dense attention U-Net is proposed. Meanwhile, we design a global context attention (GCA) block to realize multi-scale feature fusion, allowing global features from the deep network layers to flow to the shallow network layers. Further, to counter the class imbalance inherent in the retinal fundus vessel segmentation task, an AG block is also introduced. Experiments are conducted on the CHASE_DB1, DRIVE, and STARE datasets to show the effectiveness of the proposed segmentation model. The experimental results demonstrate the robustness of the proposed method, with an F1 score exceeding 82% on all three datasets, and show that it effectively improves the segmentation of thin vessels. The source code of MGA-Net is available at https://github.com/gegao310/workspace.git.
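The abstract describes a GCA block that fuses multi-scale features by letting global context from deep layers flow to shallow layers. The paper's actual block is not specified here, so the following is only a minimal NumPy sketch of one common form of such fusion: deep features are globally pooled into a per-channel context vector, turned into sigmoid gates, and used to reweight upsampled deep features before they are added to the shallow features. The function name `gca_fuse` and the exact gating scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gca_fuse(shallow, deep):
    """Illustrative sketch (not the paper's exact block) of
    global-context attention fusion: pool deep features to a
    per-channel context vector, gate with a sigmoid, and add the
    gated, upsampled deep features to the shallow features."""
    C, H, W = shallow.shape
    # nearest-neighbour upsample deep (C, H//2, W//2) -> (C, H, W)
    up = deep.repeat(2, axis=1).repeat(2, axis=2)
    # global average pooling over spatial dims -> (C, 1, 1) context
    context = deep.mean(axis=(1, 2), keepdims=True)
    gate = sigmoid(context)      # per-channel attention gate in (0, 1)
    return shallow + gate * up   # gated multi-scale fusion

# usage: fuse a deep feature map into a shallow one at double resolution
shallow = np.random.rand(8, 32, 32)
deep = np.random.rand(8, 16, 16)
out = gca_fuse(shallow, deep)   # shape (8, 32, 32)
```

In this sketch the gate is purely channel-wise; spatial attention variants would instead keep the spatial dimensions of the context map.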
