Abstract
The diagnosis of glaucoma relies primarily on accurate segmentation of the optic disc (OD) and optic cup (OC). However, OC segmentation remains challenging due to significant individual differences. This study proposes a threshold-based OD extraction method and a multi-scale residual U-Net with a context attention mechanism and an adaptive weight fusion strategy (MRCAAU-Net) to segment the OC and OD jointly. Firstly, based on the intensity difference between the OD and the background, an improved thresholding criterion combined with a template-matching technique is used to detect the OD region effectively, providing accurate input images for network training. Secondly, to address the semantic gap that arises when encoder and decoder features are fused in the segmentation network, this paper develops an adaptive weight fusion module to guide the effective connection of encoder and decoder features. Additionally, a 1D convolution-based attention mechanism offers stronger local perception; this paper integrates it into several modules and combines it with multi-scale methods to enhance OC segmentation by incorporating both global and local features. Finally, the network is guided to learn image segmentation through a hybrid loss function that ignores the background. We conducted extensive experiments on three public datasets and obtained more accurate segmentations than other state-of-the-art methods, particularly for the OC, whose accuracy closely approaches that of the OD. The proposed joint segmentation network effectively improves and balances the segmentation performance on both targets, which can greatly assist in large-scale screening for glaucoma.
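The 1D convolution-based attention described above is reminiscent of ECA-style channel attention: each channel is pooled to a single descriptor, a small 1D convolution slides across the channel descriptors to capture local cross-channel interaction, and a sigmoid gate rescales each channel. The following is a minimal dependency-free sketch of that idea, not the paper's exact module; the kernel size and the uniform placeholder weights (learned in practice) are assumptions for illustration.

```python
import math

def eca_channel_attention(feature_map, k_size=3):
    """ECA-style channel attention sketch.

    feature_map: list of C channels, each a 2D list (H x W) of floats.
    Returns the feature map with each channel rescaled by its attention gate.
    """
    C = len(feature_map)
    # 1. Global average pooling: one scalar descriptor per channel.
    descriptors = [
        sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        for ch in feature_map
    ]
    # 2. 1D convolution across the channel descriptors (zero-padded).
    #    This captures local cross-channel interaction without the
    #    dimensionality reduction used by squeeze-and-excitation blocks.
    pad = k_size // 2
    weights = [1.0 / k_size] * k_size  # placeholder; learned in a real network
    padded = [0.0] * pad + descriptors + [0.0] * pad
    conv_out = [
        sum(weights[j] * padded[i + j] for j in range(k_size))
        for i in range(C)
    ]
    # 3. Sigmoid gate, then rescale every pixel of each channel.
    gates = [1.0 / (1.0 + math.exp(-v)) for v in conv_out]
    return [
        [[g * v for v in row] for row in ch]
        for ch, g in zip(feature_map, gates)
    ]
```

Because the convolution runs over channel descriptors rather than spatial positions, the module adds only `k_size` parameters, which is why such attention can be inserted into several network modules at negligible cost.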