Abstract

Algorithms for localizing colorectal polyps have been studied extensively; however, they have often been trained and tested on the same database. In this study, we present a new application of a unified, real-time object detector based on the You-Only-Look-Once (YOLO) convolutional neural network (CNN) for localizing polyps with bounding boxes in endoscopic images. The model was first pre-trained on non-medical images and then fine-tuned on colonoscopic images from three different databases, including an image set we collected from 106 patients using narrow-band (NB) imaging endoscopy. YOLO was tested on 196 white-light (WL) images from an independent public database. When trained on augmented images from multiple WL databases, YOLO achieved a precision of 79.3% and a sensitivity of 68.3% in the localization task, with a processing time of 0.06 s/frame. In conclusion, YOLO has great potential to assist endoscopists in localizing colorectal polyps during endoscopy. CNN features of WL and NB endoscopic images are different and should be considered separately. A large-scale database covering different scenarios, imaging modalities and scales is lacking but crucial for bringing this research into practice.
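The precision and sensitivity figures reported above are computed from bounding-box matches between predicted and ground-truth polyp locations. A minimal sketch of how such metrics are typically derived is shown below; the intersection-over-union (IoU) threshold of 0.5 is an assumed, commonly used value, not one stated in the abstract.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_sensitivity(preds, gts, thr=0.5):
    """Greedily match each predicted box to an unmatched ground-truth
    polyp; a match with IoU >= thr counts as a true positive."""
    matched, tp = set(), 0
    for p in preds:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j not in matched and iou(p, g) > best:
                best, best_j = iou(p, g), j
        if best >= thr:
            tp += 1
            matched.add(best_j)
    fp = len(preds) - tp          # detections with no matching polyp
    fn = len(gts) - tp            # polyps the detector missed
    precision = tp / (tp + fp) if preds else 0.0
    sensitivity = tp / (tp + fn) if gts else 0.0
    return precision, sensitivity
```

For example, one correct detection plus one spurious box on an image with a single polyp yields a precision of 0.5 and a sensitivity of 1.0 under this scheme.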
