Abstract

This letter proposes a novel saliency detection framework that propagates the saliency of similar images retrieved from a large and diverse Internet image collection to effectively boost saliency detection performance. For an input image, a group of similar images is retrieved from the Internet image collection based on saliency-weighted color histograms and the Gist descriptor. Then, a pixel-level correspondence process between images is performed to guide saliency propagation from the retrieved images. Both the initial saliency map and the correspondence saliency map are exploited to select training samples via graph cut-based segmentation. Finally, the training samples are fed into a set of weak classifiers to learn a boosted classifier, which produces a boosted saliency map that is integrated with the initial saliency map to form the final saliency map. Experimental results on two public image datasets demonstrate that the proposed model achieves better saliency detection performance than state-of-the-art single-image saliency models and co-saliency models.
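As a rough illustration of the retrieval step described above, the sketch below computes a saliency-weighted color histogram and ranks candidate images by a combined histogram/Gist distance. The bin count, the L1/L2 distance choices, and the weighting parameter alpha are assumptions for illustration only; the abstract does not specify these details.

```python
import numpy as np

def saliency_weighted_color_histogram(image, saliency, bins_per_channel=8):
    """Color histogram in which each pixel's vote is weighted by its saliency.

    image:    H x W x 3 uint8 RGB array
    saliency: H x W float array in [0, 1] (initial saliency map)
    Returns an L1-normalised histogram of length bins_per_channel ** 3.
    """
    # Quantise each channel into bins_per_channel levels (assumed uniform binning).
    q = (image.astype(np.int64) * bins_per_channel) // 256
    # Combine the three per-channel bin indices into a single bin index per pixel.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), weights=saliency.ravel(),
                       minlength=bins_per_channel ** 3)
    total = hist.sum()
    return hist / total if total > 0 else hist

def retrieve_similar(query_hist, query_gist, db_hists, db_gists, k=5, alpha=0.5):
    """Rank database images by a weighted combination of histogram and Gist distances.

    db_hists: N x D array of saliency-weighted histograms of the collection images
    db_gists: N x G array of precomputed Gist descriptors
    alpha is a hypothetical weight balancing the two cues (not given in the abstract).
    Returns the indices of the k most similar images.
    """
    hist_dist = np.abs(db_hists - query_hist).sum(axis=1)      # L1 distance on histograms
    gist_dist = np.linalg.norm(db_gists - query_gist, axis=1)  # L2 distance on Gist
    score = alpha * hist_dist + (1.0 - alpha) * gist_dist
    return np.argsort(score)[:k]
```

The retrieved images would then drive the later stages (pixel-level correspondence, training-sample selection, and boosting), which are beyond the scope of this sketch.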
