Abstract

Retinal layer segmentation from OCT images is a fundamental task in the diagnosis and monitoring of eye-related diseases. The quest for improved accuracy is driving the use of increasingly large datasets with full pixel-level layer annotations. However, manual annotation is expensive and tedious, and the annotators also need sufficient medical knowledge, which places a heavy burden on doctors. We observe that flattened OCT images contain a large number of repetitive texture patterns. More surprisingly, when the annotation is reduced from 100% to 10%, and even to 1%, the performance of a segmentation model drops only slightly, with the error rising from \(2.53\,\upmu\text{m}\) to \(2.76\,\upmu\text{m}\) and to \(3.27\,\upmu\text{m}\) on a validation set, respectively. This observation motivates us to investigate the redundancy of annotations in the feature space, which could greatly facilitate the annotation of medical images. To substantially reduce annotation costs, we propose a new annotation-efficient learning paradigm that annotates only a fixed and limited number of pixels for each layer in each image. Given the redundancy of the repetitive patterns within each layer of OCT images, we employ a VQ memory bank that stores the features extracted across the whole dataset to augment the visual representation. Experimental results on two public datasets validate the effectiveness of our model. With only 10 annotated pixels per layer in an image, our performance is very close to that of previous methods trained on the fully annotated dataset.

Keywords: OCT layer segmentation · Annotation-efficient learning
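The abstract names the VQ memory bank but gives no implementation details. As a rough illustration only, the following is a minimal sketch of one common realization: a VQ-VAE-style codebook that snaps each encoder feature to its nearest stored prototype and concatenates the retrieved memory back onto the feature. The class name, codebook size, feature dimension, and the straight-through gradient trick are all assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn


class VQMemoryBank(nn.Module):
    """Hypothetical sketch of a VQ memory bank: a learnable codebook of
    prototype features; each input feature is matched to its nearest
    prototype, and the retrieved memory augments the representation."""

    def __init__(self, num_codes: int = 512, dim: int = 64):
        super().__init__()
        # Codebook of num_codes learnable dim-dimensional prototypes.
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, dim) per-pixel features from the encoder.
        # Squared L2 distance from each feature to every codebook entry.
        dists = (
            feats.pow(2).sum(1, keepdim=True)
            - 2 * feats @ self.codebook.weight.t()
            + self.codebook.weight.pow(2).sum(1)
        )
        idx = dists.argmin(dim=1)          # nearest prototype per feature
        quantized = self.codebook(idx)     # (N, dim) retrieved memories
        # Straight-through estimator so gradients still reach the encoder.
        quantized = feats + (quantized - feats).detach()
        # Augment the original feature with the retrieved memory: (N, 2*dim).
        return torch.cat([feats, quantized], dim=1)
```

Under this reading, the bank acts as dataset-wide shared memory: repetitive texture patterns across all flattened OCT images map to the same prototypes, so features learned from the few annotated pixels transfer to unannotated pixels with similar textures.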
