Abstract

Lesion detection is a critical component of disease diagnosis, but the manual segmentation of lesions in medical images is time-consuming and requires significant expertise. These issues have recently been addressed with deep learning models. However, most existing algorithms rely on supervised training, which requires time-intensive manual labeling and prevents the model from detecting previously unseen lesion types. This study therefore proposes a weakly supervised learning network based on CycleGAN for lesion segmentation in full-width optical coherence tomography (OCT) images. The model was trained to reconstruct the underlying normal anatomic structures from abnormal input images; lesions can then be detected by computing the difference between the input and output images. A customized network architecture and a multi-scale similarity perceptual reconstruction loss were used to extend the CycleGAN model to transfer between objects exhibiting shape deformations. The proposed technique was validated on an open-source retinal OCT image dataset. Image-level anomaly detection and pixel-level lesion detection were assessed using the area under the curve (AUC) and the Dice similarity coefficient, yielding 96.94% and 0.8239, respectively, both higher than those of all comparative methods. The average test time required to process a single full-width image was 0.039 s, shorter than the times reported in recent studies. These results indicate that our model can accurately detect and segment retinopathy lesions in real time, without the need for supervised labeling. We hope this method will help accelerate clinical diagnosis and reduce misdiagnosis rates.
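As a rough illustration of the reconstruction-difference idea summarized above (a minimal sketch, not the authors' implementation), the following Python snippet derives a pixel-level anomaly map and an image-level anomaly score from an input image and its reconstruction. The function name, the fixed threshold, and the use of mean absolute error as the image-level score are all assumptions made for illustration.

import numpy as np

def anomaly_from_reconstruction(input_img: np.ndarray,
                                reconstructed_img: np.ndarray,
                                threshold: float = 0.1):
    """Hypothetical post-processing step: score lesions via the
    difference between an abnormal input and its reconstructed
    'normal' counterpart (intensities assumed scaled to [0, 1])."""
    # Pixel-level anomaly map: absolute reconstruction error.
    anomaly_map = np.abs(input_img.astype(np.float32)
                         - reconstructed_img.astype(np.float32))
    # Pixel-level lesion mask: threshold the error map
    # (the threshold value here is an assumption, not from the paper).
    lesion_mask = anomaly_map > threshold
    # Image-level anomaly score, e.g. mean reconstruction error,
    # usable for AUC-style image-level anomaly detection.
    image_score = float(anomaly_map.mean())
    return anomaly_map, lesion_mask, image_score

In practice, the reconstruction would come from the trained CycleGAN generator, and the segmentation output could then be compared against ground-truth masks with the Dice similarity coefficient, as reported in the abstract.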
