Abstract
Accurate gastrointestinal (GI) lesion segmentation is crucial for diagnosing digestive tract diseases. Automatic lesion segmentation in endoscopic images can relieve physicians' workload and improve patient survival rates. However, although the strong results of deep learning approaches on many tasks depend heavily on large labeled datasets, pixel-wise annotation is highly labor-intensive, especially in clinical settings, whereas large collections of unlabeled images are often readily available. Limited labeled data also hinder the generalizability of models trained with fully supervised learning for computer-aided diagnosis (CAD) systems. To tackle the challenge of limited annotations, this work proposes a generative adversarial learning-based semi-supervised segmentation framework for GI lesion diagnosis in endoscopic images. The proposed approach leverages a limited annotated dataset together with a large unlabeled dataset when training the networks. We extensively tested the proposed method on 4880 endoscopic images. Compared with current related works, the proposed method achieves better results (Dice similarity coefficient = 89.42 ± 3.92, Intersection over Union = 80.04 ± 5.75, Precision = 91.72 ± 4.05, Recall = 90.11 ± 5.64, and Hausdorff distance = 23.28 ± 14.36) on challenging multi-site datasets, confirming the effectiveness of the proposed framework. We explore a semi-supervised lesion segmentation method that makes full use of numerous unlabeled endoscopic images to improve segmentation accuracy. Experimental results confirm the potential of our method, which outperforms current related works. The proposed CAD system can help minimize diagnostic errors.
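To make the training scheme described above concrete, the following is a minimal sketch of an adversarial semi-supervised segmentation loop in PyTorch. The network architectures, loss weight (lambda_adv), and function names below are illustrative assumptions for exposition, not the paper's exact implementation: a segmentation network is trained with a supervised loss on the labeled batch and an adversarial loss that pushes its predictions on unlabeled images to resemble ground-truth masks, as judged by a discriminator.

```python
# Sketch of adversarial semi-supervised segmentation (assumed setup, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNet(nn.Module):
    """Toy encoder-decoder standing in for the segmentation generator."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 16, 3, padding=1)
        self.dec = nn.Conv2d(16, 1, 3, padding=1)
    def forward(self, x):
        return self.dec(F.relu(self.enc(x)))  # logits of the lesion mask

class Discriminator(nn.Module):
    """Judges whether a mask looks like a ground-truth annotation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))
    def forward(self, mask):
        return self.net(mask)

seg, disc = SegNet(), Discriminator()
opt_s = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.01  # assumed weight of the adversarial term

def train_step(x_lab, y_lab, x_unlab):
    """One update from a labeled batch (x_lab, y_lab) and an unlabeled batch x_unlab."""
    # Discriminator: real ground-truth masks vs. masks predicted on unlabeled images.
    with torch.no_grad():
        pred_unlab = torch.sigmoid(seg(x_unlab))
    d_real, d_fake = disc(y_lab), disc(pred_unlab)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Segmentation network: supervised loss + adversarial loss on unlabeled data.
    loss_sup = bce(seg(x_lab), y_lab)
    d_out = disc(torch.sigmoid(seg(x_unlab)))
    loss_adv = bce(d_out, torch.ones_like(d_out))  # try to fool the discriminator
    loss_s = loss_sup + lambda_adv * loss_adv
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_d.item(), loss_s.item()
```

In this kind of scheme, the unlabeled endoscopic images contribute only through the adversarial term, which is what allows the framework to benefit from data without pixel-wise annotations.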