Abstract

In this paper, we present a novel bottom-up salient object detection approach that exploits the relationship between saliency detection and null space learning. A key observation is that the saliency of an image segment can be estimated by measuring its distance to a single point that represents the background or foreground (salient) samples in the corresponding null space. We apply the null Foley–Sammon transform to model the null spaces of the background and foreground salient samples, in which the potentially large and complex intra-class variations of the samples are removed entirely and the discriminative features of each class are represented by a single point. We then formulate the separation of salient regions from the background as a distance measurement to this single point in the null space. An optimization algorithm is devised to fuse the background-sample-based and foreground-sample-based saliency maps. Results on five benchmark datasets show that the proposed method outperforms recent state-of-the-art methods under different evaluation metrics, especially on complex natural images.
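The core idea can be sketched as follows. For a single class (e.g. background samples), the null space of the within-class scatter contains exactly those directions along which every training sample projects to the same point, so the whole class collapses to that point and saliency becomes a distance to it. This is a minimal illustrative sketch, not the paper's implementation: the function names, feature dimensions, and the SVD-based null space computation are assumptions for illustration, and it requires more feature dimensions than samples so that the null space is non-empty.

```python
import numpy as np

def nullspace_projector(samples):
    """samples: (n, d) feature vectors of one class (e.g. background segments).

    Returns a basis W of the null space of the within-class scatter and the
    single point to which all class samples project under W.
    """
    mu = samples.mean(axis=0)
    centered = samples - mu
    # Right-singular vectors with (near-)zero singular values span the
    # null space of the centered data, i.e. directions w with w^T (x_i - mu) = 0.
    _, s, vt = np.linalg.svd(centered, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    W = vt[rank:].T            # (d, d - rank) null space basis
    target = W.T @ mu          # every class sample maps to this single point
    return W, target

def saliency_score(x, W, target):
    # Distance in the null space to the collapsed class point;
    # small distance = similar to the modeled class, large = dissimilar.
    return float(np.linalg.norm(W.T @ x - target))
```

With background samples, segments far from the collapsed point score as salient; with foreground samples the scoring is inverted, and the paper fuses the two resulting maps.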
