Abstract

Training deep learning methods for optical coherence tomography (OCT) retinal and choroidal layer segmentation is challenging when data are scarce. In medical image analysis this is often the case: confidentiality agreements and data privacy concerns restrict data sharing, a problem further exacerbated for rare pathologies. Even where OCT data are readily available, producing the requisite annotations is time-consuming, costly, and error-prone. Data augmentation and semi-supervised learning (SSL) are two techniques employed in deep learning to enhance training in these situations. In this study, we extend our previous work proposing an enhanced StyleGAN2-based data augmentation method for OCT images by employing SSL through a novel cross-localisation technique. The technique increases the diversity of the synthetic data by automatically combining styles from unlabelled data with those from labelled data. Because the core idea is simple yet highly performant, the method can readily be used to extend StyleGAN2. In this work, we optimise the method through a set of ablations and propose a targeted, task-specific model selection technique for better generator selection, further boosting performance. We apply the method to OCT retinal and choroidal layer segmentation, demonstrating its effectiveness through substantial improvements in patch classification performance and significant reductions in choroidal layer segmentation error.
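The cross-localisation idea described above builds on StyleGAN2's per-layer style inputs, where the synthesis network can be driven by different intermediate latents at different layers. The following is a minimal NumPy sketch of that style-mixing mechanism between a labelled and an unlabelled latent code; the layer count, latent dimensionality, mapping function, and crossover point are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_LAYERS = 14  # synthesis layers in a StyleGAN2-like generator (assumption)
W_DIM = 512      # intermediate latent (w) dimensionality (assumption)

def mapping(z):
    """Stand-in for the StyleGAN2 mapping network z -> w (hypothetical)."""
    return np.tanh(z)

def mix_styles(w_labelled, w_unlabelled, crossover):
    """Assemble per-layer styles: coarse layers (< crossover) take the
    labelled style, fine layers take the unlabelled style."""
    styles = np.tile(w_labelled, (NUM_LAYERS, 1))
    styles[crossover:] = w_unlabelled
    return styles

# One labelled and one unlabelled latent code.
z_lab = rng.standard_normal(W_DIM)
z_unl = rng.standard_normal(W_DIM)

styles = mix_styles(mapping(z_lab), mapping(z_unl), crossover=7)
print(styles.shape)  # one style vector per synthesis layer: (14, 512)
```

Feeding such a mixed per-layer style stack to the synthesis network yields samples whose coarse structure follows one code and whose fine appearance follows the other, which is how mixing labelled and unlabelled styles can broaden the diversity of the synthetic training data.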
