Abstract

Saliency detection remains a challenging problem in computer vision and image processing. In this paper, we propose a novel visual saliency detection framework based on bagging-based saliency distribution learning (BSDL). Given an input image, we first segment it into superpixels as the basic processing units. Two kinds of prior knowledge, namely the background prior and the center prior, are then integrated to generate an initial prior map, which is used to select training samples from all superpixels to train the BSDL model. Specifically, BSDL consists of two stages. In the first stage, a bagging-based sampling method is used to train K saliency classifiers from the training samples, and these classifiers predict a saliency value for each superpixel. In the second stage, we learn a saliency distribution model that infers the relationship between each classifier and each superpixel; that is, for each superpixel, BSDL not only trains K saliency classifiers to predict its saliency value, but also infers the reliability of each classifier for that prediction. As a result, the saliency value of each superpixel is determined by its K predicted saliency values and its saliency distribution. After BSDL, we propose a foreground consistency saliency optimization (FCSO) framework to further refine the saliency map obtained by BSDL. To improve computational efficiency, a prejudgment rule is proposed to evaluate the quality of the saliency map produced by BSDL and to decide whether FCSO is needed for the input image. Experimental results on four public datasets demonstrate the superiority of the proposed method over other state-of-the-art methods.
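To make the two BSDL stages concrete, below is a minimal, illustrative Python sketch of bagging-based training of K saliency classifiers and the weighted fusion of their per-superpixel predictions. The feature matrix X, the pseudo-labels y drawn from an initial prior map, the SVR base learner, and the weight matrix W (standing in for the learned saliency distribution) are all assumptions made for illustration; the paper's actual features, classifiers, and distribution-learning objective are not specified in the abstract.

    # Sketch of the bagging stage and the distribution-weighted fusion of the
    # K classifier predictions. All names and parameters here are illustrative.
    import numpy as np
    from sklearn.svm import SVR


    def train_bagged_classifiers(X_train, y_train, K=10, sample_ratio=0.8, seed=0):
        """Train K saliency regressors on bootstrap-style subsets of the samples."""
        rng = np.random.default_rng(seed)
        n = X_train.shape[0]
        models = []
        for _ in range(K):
            idx = rng.choice(n, size=int(sample_ratio * n), replace=True)
            model = SVR(kernel="rbf")  # base learner is an assumption, not from the paper
            model.fit(X_train[idx], y_train[idx])
            models.append(model)
        return models


    def fuse_predictions(models, X_all, W):
        """Combine the K per-superpixel predictions using distribution weights W.

        W[i, k] is the (assumed) reliability of classifier k for superpixel i;
        rows are normalized so each superpixel's weights sum to one.
        """
        preds = np.column_stack([m.predict(X_all) for m in models])  # shape (n, K)
        W = W / np.clip(W.sum(axis=1, keepdims=True), 1e-12, None)
        saliency = (preds * W).sum(axis=1)
        # Rescale to [0, 1] so the result can be viewed as a saliency map.
        return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)


    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.random((200, 16))    # toy superpixel features
        y = rng.random(200)          # toy pseudo-labels from an initial prior map
        models = train_bagged_classifiers(X, y, K=5)
        W = rng.random((200, 5))     # placeholder for the learned saliency distribution
        print(fuse_predictions(models, X, W)[:5])

In this sketch the weight matrix W is random; in the actual method it would come from the second-stage saliency distribution learning, which the abstract describes only at a high level.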
