Abstract

In this paper, we propose a novel automatic computer-aided method to detect polyps in colonoscopy videos. To capture perceptually and semantically meaningful salient polyp regions, we first segment images into multilevel superpixels, where each level corresponds to a different superpixel size. Rather than adopting hand-designed features to describe these superpixels, we employ a sparse autoencoder (SAE) to learn discriminative features in an unsupervised way. Then, a novel unified bottom-up and top-down saliency method is proposed to detect polyps. In the first stage, we propose a weak bottom-up (WBU) saliency map by fusing contrast-based saliency and object-center-based saliency. The contrast-based saliency map highlights image parts whose appearance differs from surrounding areas, whereas the object-center-based saliency map emphasizes the center of the salient object. In the second stage, a strong classifier with multiple kernel boosting is learned to calculate the strong top-down (STD) saliency map based on samples drawn directly from the obtained multilevel WBU saliency maps. We finally integrate these two-stage saliency maps from all levels to highlight polyps. Experimental results achieve a recall of 0.818 for saliency calculation, validating the effectiveness of our method. Extensive experiments on public polyp datasets demonstrate that the proposed saliency algorithm outperforms state-of-the-art saliency methods in detecting polyps.
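The first-stage fusion described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Gaussian fall-off for the center cue, the multiplicative fusion rule, and the `sigma` parameter are all assumptions made for clarity; in the paper the superpixel features come from the learned SAE.

```python
import numpy as np

def wbu_saliency(features, centers, obj_center, sigma=0.25):
    """Sketch of a weak bottom-up (WBU) saliency map over superpixels.

    features:   (N, D) feature vector per superpixel (e.g., SAE features)
    centers:    (N, 2) normalized (x, y) centroids of the superpixels
    obj_center: (2,)   normalized estimate of the salient object's center
    Returns an (N,) saliency score per superpixel in [0, 1].
    """
    # Contrast-based saliency: mean feature distance to all other superpixels,
    # so regions that look different from their surroundings score high.
    diffs = features[:, None, :] - features[None, :, :]
    contrast = np.linalg.norm(diffs, axis=2).mean(axis=1)
    rng_span = contrast.max() - contrast.min()
    contrast = (contrast - contrast.min()) / (rng_span + 1e-12)

    # Object-center-based saliency: Gaussian fall-off with distance from the
    # estimated object center (the fall-off shape is an assumption here).
    d2 = ((centers - obj_center) ** 2).sum(axis=1)
    center_sal = np.exp(-d2 / (2.0 * sigma ** 2))

    # Fuse the two cues; multiplicative fusion keeps only regions that are
    # both high-contrast and near the object center.
    return contrast * center_sal
```

Running this per superpixel level and averaging the resulting maps would give the multilevel WBU input to the second-stage classifier.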
