Abstract

In this study, the authors propose a novel framework for top-down (TD) saliency detection that is well suited to locating category-specific objects in natural images. The saliency value of an image region is defined as the probability that it belongs to the target category, given its visual features. The authors introduce an effective coding strategy, locality-constrained contextual coding (LCCC), which enforces both locality and contextual constraints, and they present a contextual pooling operation that takes advantage of the contextual information of features. Benefiting from LCCC and contextual pooling, the resulting feature representation is highly discriminative, which allows the proposed saliency detection method to achieve competitive results against existing saliency detection algorithms. Bottom-up cues are also incorporated into the framework to complement the proposed TD saliency algorithm. Experimental results on three datasets (Graz-02, Weizmann Horse and PASCAL VOC 2007) show that the proposed framework outperforms state-of-the-art methods in terms of visual quality and accuracy.
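The abstract does not give the exact LCCC objective or the contextual pooling operator, so the following is only a minimal sketch of the general idea: a locality-constrained coding step in the spirit of locality-constrained linear coding (LLC), followed by max pooling over a patch and its spatial neighbours as a stand-in for contextual pooling. The function names, the parameters `lam` and `sigma`, and the omission of the authors' contextual coding term are all assumptions, not the paper's formulation.

```python
import numpy as np

def locality_constrained_code(x, B, lam=1e-4, sigma=1.0):
    """Encode a descriptor x (shape (d,)) over a codebook B (shape (d, K))
    with a locality penalty, following the LLC closed-form approximation.
    Note: the contextual constraint of the authors' LCCC is not specified
    in the abstract and is omitted here (assumption)."""
    # Locality weights grow with the distance from x to each codeword.
    dist = np.linalg.norm(B - x[:, None], axis=0)
    d = np.exp(dist / sigma)
    # Approximate solution of
    #   min_c ||x - B c||^2 + lam * ||d * c||^2   s.t.  1^T c = 1
    C = (B - x[:, None]).T @ (B - x[:, None])   # K x K data term
    C += lam * np.diag(d ** 2)                  # locality regulariser
    c = np.linalg.solve(C, np.ones(C.shape[0])) # unnormalised codes
    return c / c.sum()                          # enforce sum-to-one

def contextual_max_pool(codes, neighbour_idx):
    """Pool each patch's code together with the codes of its spatial
    neighbours (hypothetical stand-in for the contextual pooling step).
    codes: (N, K) array; neighbour_idx: list of index arrays, one per patch."""
    return np.array([codes[idx].max(axis=0) for idx in neighbour_idx])

# Toy usage: 64-dim descriptors, a codebook of 128 codewords, 3 patches
# where each patch is pooled with itself and its immediate neighbours.
rng = np.random.default_rng(0)
B = rng.standard_normal((64, 128))
X = rng.standard_normal((3, 64))
codes = np.stack([locality_constrained_code(x, B) for x in X])
pooled = contextual_max_pool(codes, [[0, 1], [0, 1, 2], [1, 2]])
print(pooled.shape)  # (3, 128)
```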
