Abstract

Mixed visual scenes and cluttered backgrounds commonly exist in natural images, which poses a challenge for saliency detection. When dealing with complex images, existing saliency detection methods suffer from two kinds of deficiencies: ambiguous object boundaries and fragmented salient regions. To address these two limitations, we propose a novel edge-oriented framework to improve the performance of existing saliency detection methods. Our framework is based on two interesting insights: 1) human eyes are sensitive to the edges between foreground and background even when there is hardly any difference in terms of saliency, and 2) guided by semantic integrity, human eyes tend to view a visual scene as several objects rather than as pixels or superpixels. The proposed framework consists of three parts. First, an edge probability map is extracted from an input image. Second, an edge-based over-segmentation is obtained by sharpening the edge probability map, which is then utilized to generate edge-regions using an edge-strength based hierarchical merge model. Finally, based on the prior saliency map generated by existing methods, the framework assigns each edge-region a saliency value. Experiments on four publicly available datasets demonstrate that the proposed framework significantly improves the detection results of existing saliency detection models and is also superior to other state-of-the-art methods.
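The sketch below illustrates the three-step structure described in the abstract (edge map, edge-based over-segmentation, region-wise saliency assignment). It is a minimal illustration only: the specific operators used here (a Sobel gradient as the edge probability map, a watershed over-segmentation, and mean-pooling of the prior saliency map over regions) are stand-in assumptions, not the authors' actual components, and the edge-strength based hierarchical merge step is omitted.

```python
# Minimal sketch of an edge-oriented saliency refinement pipeline.
# Assumptions: Sobel edges, watershed over-segmentation, and mean-pooling
# replace the paper's edge probability map, sharpening, and hierarchical merge.
import numpy as np
from skimage import filters, segmentation


def refine_saliency(image_gray, prior_saliency, n_markers=400):
    """Assign each edge-derived region a single saliency value."""
    # 1) Edge probability map (placeholder: Sobel gradient magnitude).
    edge_map = filters.sobel(image_gray)

    # 2) Edge-based over-segmentation: watershed on a crudely "sharpened"
    #    edge map. The paper's hierarchical, edge-strength based merge of
    #    these segments into edge-regions is not reproduced here.
    sharpened = edge_map ** 2
    regions = segmentation.watershed(sharpened, markers=n_markers,
                                     compactness=0.001)

    # 3) Pool the prior saliency map over each region so every region
    #    receives one value, which yields sharper object boundaries and
    #    less fragmented salient regions.
    refined = np.zeros_like(prior_saliency, dtype=float)
    for label in np.unique(regions):
        mask = regions == label
        refined[mask] = prior_saliency[mask].mean()
    return refined
```

Here `prior_saliency` stands for the saliency map produced by any existing detection model, which the framework takes as input and refines region by region.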
