Abstract

Colonoscopy image classification predicts whether colonoscopy images contain polyps and is an important component of an automatic polyp detection system. Recently, deep neural networks have been widely used for colonoscopy image classification because they extract features automatically and achieve high accuracy. However, training these networks requires a large amount of manually annotated data, which is expensive to acquire and limited by the availability of endoscopy specialists. We propose a novel method for training colonoscopy image classification networks that uses self-supervised visual feature learning to overcome this challenge. We adapt image denoising as a pretext task for self-supervised visual feature learning from an unlabeled colonoscopy image dataset: noise is added to each image to form the input, and the original image serves as the label. We use an unlabeled dataset of 8,500 colonoscopy images collected from the PACS system of Hospital 103 to train the pretext network. The feature extractor of the pretext network, trained in this self-supervised way, is then used for colonoscopy image classification, and a small labeled subset of the public colonoscopy image dataset Kvasir is used to fine-tune the classifier. Our experiments demonstrate that the proposed self-supervised learning method achieves higher colonoscopy image classification accuracy than a classifier trained from scratch, especially on small training datasets. When only 200 annotated images are used for training, the proposed method improves accuracy from 72.16% to 93.15% compared to the baseline classifier.
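The denoising pretext task described above can be illustrated with a minimal sketch. This is not the paper's implementation: the network, data, hyperparameters, and variable names are all illustrative assumptions. A tiny one-hidden-layer denoising autoencoder is trained on random stand-in "images" (noisy input, clean original as the label), after which the encoder plays the role of the feature extractor reused by the downstream classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the unlabeled image set: 64 flattened 8x8 "images".
# In the paper this would be the 8,500 unlabeled colonoscopy images.
X = rng.random((64, 64))

# Pretext task setup: noisy image as input, clean image as the label.
noise_std = 0.1  # assumed noise level, not from the paper
X_noisy = X + rng.normal(0.0, noise_std, X.shape)

# One-hidden-layer denoising autoencoder: encoder W1, decoder W2.
d, h = X.shape[1], 16
W1 = rng.normal(0.0, 0.1, (d, h))
W2 = rng.normal(0.0, 0.1, (h, d))
lr = 0.1

losses = []
for epoch in range(200):
    Z = np.tanh(X_noisy @ W1)   # encoder: learned visual features
    X_hat = Z @ W2              # decoder: reconstruction attempt
    err = X_hat - X             # compared against the clean originals
    losses.append(float(np.mean(err ** 2)))
    # Gradient descent on the mean-squared reconstruction loss.
    gW2 = Z.T @ err / len(X)
    gZ = (err @ W2.T) * (1.0 - Z ** 2)   # tanh derivative
    gW1 = X_noisy.T @ gZ / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

# After pretraining, the encoder (W1) is the feature extractor that
# would be fine-tuned on the small labeled set for polyp classification.
features = np.tanh(X @ W1)
```

In the paper's pipeline the same idea applies at scale: a convolutional network replaces this two-matrix autoencoder, and its convolutional encoder is transferred to the classifier fine-tuned on Kvasir.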
