Abstract

In this study, the authors propose a novel framework for top-down (TD) saliency detection, which is well suited to locating category-specific objects in natural images. The saliency value is defined as the probability of a target given its visual features. They introduce an effective coding strategy called locality constrained contextual coding (LCCC), which enforces both locality and contextual constraints. Furthermore, a contextual pooling operation is presented to take advantage of feature contextual information. Benefiting from LCCC and contextual pooling, the obtained feature representation has high discriminative power, which enables the authors' saliency detection method to achieve results competitive with existing saliency detection algorithms. They also incorporate bottom-up cues into their framework to complement the proposed TD saliency algorithm. Experimental results on three datasets (Graz-02, Weizmann Horse and PASCAL VOC 2007) show that the proposed framework outperforms state-of-the-art methods in terms of visual quality and accuracy.
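The abstract does not detail the coding step, but a minimal sketch of a generic locality-constrained coding and pooling pipeline, in the spirit of LLC (Wang et al., CVPR 2010), may help illustrate the kind of operation involved. Note this is an assumption-laden illustration: the paper's LCCC additionally enforces contextual constraints and uses a contextual pooling operation, neither of which is reproduced here; the function names and parameters (`k`, `beta`) are hypothetical.

```python
import numpy as np

def locality_constrained_code(x, dictionary, k=5, beta=1e-4):
    """Encode a descriptor x (d,) over a codebook `dictionary` (K, d) using
    only its k nearest codewords (locality constraint). This is a generic
    LLC-style sketch, not the paper's full LCCC, which also enforces
    contextual constraints."""
    # Pick the k nearest codewords to the descriptor.
    dists = np.linalg.norm(dictionary - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = dictionary[idx]                     # (k, d) local basis

    # Solve the small least-squares problem
    #   min_c ||x - c^T B||^2  s.t.  sum(c) = 1
    z = B - x                               # shift basis toward the descriptor
    C = z @ z.T + beta * np.eye(k)          # regularized local covariance
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                            # enforce the sum-to-one constraint

    # Scatter the local code into a sparse K-dimensional vector.
    code = np.zeros(dictionary.shape[0])
    code[idx] = c
    return code

def max_pool(codes):
    """Plain max pooling over the codes of all descriptors in a region;
    the paper replaces this with a contextual pooling operation."""
    return np.max(codes, axis=0)

# Usage sketch: encode a set of local descriptors and pool them.
rng = np.random.default_rng(0)
dictionary = rng.normal(size=(256, 128))    # K=256 codewords of dimension 128
descriptors = rng.normal(size=(50, 128))    # 50 local descriptors
codes = np.stack([locality_constrained_code(d, dictionary) for d in descriptors])
region_feature = max_pool(codes)            # discriminative region representation
```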
