Abstract

Drusen are considered a hallmark for the diagnosis of age-related macular degeneration (AMD) and an important risk factor for its development. Accurate segmentation of drusen in retinal optical coherence tomography (OCT) images is therefore crucial for the early diagnosis of AMD. However, drusen segmentation in retinal OCT images remains very challenging due to large variations in drusen size and shape, blurred boundaries, and speckle noise interference. Moreover, the scarcity of OCT datasets with pixel-level annotations is another major factor hindering improvements in drusen segmentation accuracy. To address these problems, we propose a novel multi-scale transformer global attention network (MsTGANet) for drusen segmentation in retinal OCT images. MsTGANet is built on a U-shaped architecture, with a novel multi-scale transformer non-local (MsTNL) module inserted at the top of the encoder path to capture multi-scale non-local features with long-range dependencies from different encoder layers. In addition, a novel multi-semantic global channel and spatial joint attention module (MsGCS) between the encoder and decoder guides the model to fuse features with different semantics, improving its ability to learn multi-semantic global contextual information. Furthermore, to alleviate the shortage of labeled data, we propose a semi-supervised version of MsTGANet (Semi-MsTGANet) based on a pseudo-label data-augmentation strategy, which can leverage a large amount of unlabeled data to further improve segmentation performance. Finally, comprehensive experiments are conducted to evaluate the proposed MsTGANet and Semi-MsTGANet. The experimental results show that our methods achieve better segmentation accuracy than other state-of-the-art CNN-based methods.
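
Since the full text is not available here, the following is a minimal, hypothetical PyTorch sketch of how the abstract's description could be organized: a U-shaped encoder/decoder, an MsTNL-style transformer bottleneck fed by features from every encoder stage, and MsGCS-style attention applied on the skip connections. All module internals (channel widths, number of attention heads, the exact fusion scheme) are assumptions, not the authors' implementation; only the placement of the two modules follows the abstract.

```python
# Hypothetical sketch of the MsTGANet layout described in the abstract.
# Module internals are assumptions; only the placement of MsTNL (top of the
# encoder) and MsGCS (on the encoder-decoder skips) follows the text.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 conv layers, the usual U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MsTNL(nn.Module):
    """Assumed stand-in for the multi-scale transformer non-local module:
    pools features from every encoder stage to the bottleneck resolution,
    projects them to a common width, and runs attention over the combined
    tokens to model multi-scale long-range dependencies."""

    def __init__(self, chs, dim=256, heads=4):
        super().__init__()
        self.projs = nn.ModuleList([nn.Conv2d(c, dim, 1) for c in chs])
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Conv2d(dim, chs[-1], 1)

    def forward(self, feats):
        h, w = feats[-1].shape[-2:]
        tokens = []
        for proj, f in zip(self.projs, feats):
            f = proj(F.adaptive_avg_pool2d(f, (h, w)))
            tokens.append(f.flatten(2).transpose(1, 2))   # (B, h*w, dim)
        q = tokens[-1]                                    # deepest features query
        kv = torch.cat(tokens, dim=1)                     # multi-scale keys/values
        fused, _ = self.attn(q, kv, kv)
        fused = fused.transpose(1, 2).reshape(-1, q.shape[-1], h, w)
        return feats[-1] + self.out(fused)                # residual fusion


class MsGCS(nn.Module):
    """Assumed stand-in for the multi-semantic global channel/spatial joint
    attention: reweights a skip feature with channel and spatial gates
    conditioned on the (upsampled) decoder feature."""

    def __init__(self, skip_ch, dec_ch):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(skip_ch + dec_ch, skip_ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(skip_ch + dec_ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, skip, dec):
        dec = F.interpolate(dec, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        joint = torch.cat([skip, dec], dim=1)
        return skip * self.channel(joint) * self.spatial(joint)


class MsTGANet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, widths=(32, 64, 128, 256)):
        super().__init__()
        self.encs = nn.ModuleList()
        c = in_ch
        for w_ in widths:
            self.encs.append(conv_block(c, w_))
            c = w_
        self.mstnl = MsTNL(list(widths))
        self.gcs = nn.ModuleList([MsGCS(a, b) for a, b in zip(widths[:-1], widths[1:])])
        self.decs = nn.ModuleList(
            [conv_block(a + b, a) for a, b in zip(widths[:-1], widths[1:])])
        self.head = nn.Conv2d(widths[0], n_classes, 1)

    def forward(self, x):
        feats = []
        for i, enc in enumerate(self.encs):
            x = enc(x if i == 0 else F.max_pool2d(x, 2))
            feats.append(x)
        x = self.mstnl(feats)                             # transformer bottleneck
        for i in reversed(range(len(self.decs))):
            skip = self.gcs[i](feats[i], x)               # attended skip connection
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = self.decs[i](torch.cat([skip, x], dim=1))
        return self.head(x)
```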
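Likewise, the abstract only names the pseudo-label data-augmentation strategy behind Semi-MsTGANet. A generic self-training round of that kind might look as follows; the confidence threshold, the pixel-wise filtering, and the ignore-index convention are illustrative assumptions rather than the paper's procedure.

```python
# Hypothetical self-training round for the pseudo-label strategy named in
# the abstract; threshold and filtering scheme are assumptions.
import torch

def pseudo_label_round(model, unlabeled_loader, threshold=0.9, device="cpu"):
    """Run the current model over unlabeled B-scans and keep only pixels
    whose predicted class probability exceeds the threshold; all other
    pixels are marked 'ignore' (-1) so they contribute no gradient."""
    model.eval()
    pseudo = []
    with torch.no_grad():
        for images in unlabeled_loader:
            probs = torch.softmax(model(images.to(device)), dim=1)
            conf, labels = probs.max(dim=1)
            labels[conf < threshold] = -1   # drop low-confidence pixels
            pseudo.append((images.cpu(), labels.cpu()))
    return pseudo

# Training would then mix labeled pairs with these pseudo-labeled pairs,
# e.g. via nn.CrossEntropyLoss(ignore_index=-1), and repeat for a few rounds.
```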
