Abstract

Optic disc (OD) and optic cup (OC) regional parameters are of utmost importance in the early diagnosis of glaucoma, so improving the accuracy of OD/OC segmentation and parameter extraction in colour fundus images plays a very important role in early glaucoma screening. To improve both the accuracy and the inference speed of fundus image segmentation, this paper proposes a fundus image segmentation algorithm based on an attention U-Net with transfer learning. First, attention gates were added between the encoder and decoder of U-Net to focus the network on the target regions, forming the attention U-Net architecture. Then, the network was trained on the DRIONS-DB dataset to obtain initial encoder weights, and subsequently trained on the Drishti-GS dataset to further refine those weights. Finally, the trained attention U-Net model incorporating transfer learning was used to segment fundus images. Compared with existing algorithms, OD/OC extraction with this method shows clear advantages in model size and inference time: the parameter count is much smaller, and the inference time is 0.33 s, a reduction of more than 50%. The proposed method can be applied to fundus image datasets with only a small number of labels and, whilst offering fast OD/OC segmentation, still guarantees relatively high segmentation accuracy.
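The architecture summarised above hinges on attention gates inserted into the U-Net skip connections, which re-weight encoder features using a gating signal from the decoder so that the network concentrates on the OD/OC regions. The following is a minimal PyTorch sketch of such an additive attention gate, not the authors' code; the module and parameter names (AttentionGate, inter_channels) are illustrative, and it assumes the gating signal has already been resampled to the same spatial size as the skip-connection features.

import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate for a U-Net skip connection (sketch).

    g: gating signal from the decoder path (coarser, semantically richer)
    x: skip-connection features from the encoder path
    The gate outputs per-pixel coefficients in [0, 1] that suppress
    responses in irrelevant background regions before concatenation.
    """
    def __init__(self, g_channels, x_channels, inter_channels):
        super().__init__()
        self.w_g = nn.Sequential(
            nn.Conv2d(g_channels, inter_channels, kernel_size=1),
            nn.BatchNorm2d(inter_channels),
        )
        self.w_x = nn.Sequential(
            nn.Conv2d(x_channels, inter_channels, kernel_size=1),
            nn.BatchNorm2d(inter_channels),
        )
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.BatchNorm2d(1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # Project both inputs to a shared intermediate space, combine
        # additively, and squash to a single-channel attention map.
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))
        return x * alpha  # attention-weighted encoder features

For the transfer-learning step described in the abstract, one plausible realisation is to pretrain the full model on DRIONS-DB, reload those weights into a fresh model (for example with load_state_dict(..., strict=False)), and then continue training on Drishti-GS so that the encoder weights obtained in pretraining are further refined.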
