Abstract

Mimicking the biological visual attention mechanism to discriminate visually salient regions in natural scenes has been a hot research topic in recent years. However, designing computational models with self-driven capability for open-world scenarios remains a challenging task that deserves further study. In this paper, we propose an unsupervised learning approach to detect salient objects in images by fully exploiting the multi-context semantic information of the scenes. Specifically, a self-driven model combining the ideas of discriminative metric learning and structured sparse constraints is designed to find an optimal semantic mapping space for robust scene-specific saliency prediction in complex environments. Meanwhile, a heuristic alternating optimization algorithm is developed to remove the ambiguity in the coarse geometric prior and generate a fine-grained discriminative model for saliency. On this basis, multi-context visual scenes are jointly modeled and fused to capture the hierarchical structure of the image for high-quality saliency map generation. Finally, we conduct experiments on four saliency benchmark datasets to verify the effectiveness of the proposed approach, comparing it with 18 state-of-the-art saliency detection methods. Both the qualitative saliency maps and the quantitative metrics indicate that our method outperforms its counterparts across diversified scenes. In addition, the proposed approach is applied to modeling wide synthetic aperture radar images for rapid target detection, and promising results are obtained.
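
To make the core optimization concrete, the following is a minimal sketch of discriminative metric learning under a structured (row-wise group-lasso) sparsity constraint, solved by alternating a gradient step on the smooth discriminative term with a proximal shrinkage step on the non-smooth sparsity term. The objective, the function names (learn_metric, group_soft_threshold), and the use of pseudo background/foreground labels standing in for the coarse geometric prior are all illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch: discriminative metric learning with a structured
# (row-wise group-lasso) sparsity constraint, optimized by alternating
# gradient and proximal steps. The objective, names, and toy data are
# illustrative assumptions, not the paper's actual model.
import numpy as np

def group_soft_threshold(W, tau):
    """Proximal operator of the row-wise group lasso: shrinks each row
    of W toward zero, zeroing out uninformative feature dimensions."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

def learn_metric(X, y, n_dims=16, alpha=1.0, lam=0.01, lr=1e-3, n_iters=300):
    """Learn a projection W (d x n_dims) that pulls same-label pairs
    together and pushes different-label pairs apart, with row-sparse W."""
    n, d = X.shape
    W = np.random.default_rng(0).standard_normal((d, n_dims)) * 0.01

    # Difference-scatter matrices over same- and different-label pairs.
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    ns = nb = 0
    for i in range(n):
        for j in range(i + 1, n):
            dd = np.outer(X[i] - X[j], X[i] - X[j])
            if y[i] == y[j]:
                Sw += dd; ns += 1
            else:
                Sb += dd; nb += 1
    M = Sw / max(ns, 1) - alpha * Sb / max(nb, 1)

    # Alternate: gradient step on the smooth loss trace(W^T M W),
    # then a proximal step on the non-smooth group-lasso penalty.
    for _ in range(n_iters):
        W = W - lr * 2.0 * (M @ W)
        W = group_soft_threshold(W, lr * lam)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy superpixel features; pseudo labels stand in for the coarse
    # geometric (e.g. boundary) prior mentioned in the abstract.
    bg = rng.normal(0.0, 1.0, size=(40, 10))   # pseudo background
    fg = rng.normal(2.0, 1.0, size=(20, 10))   # pseudo foreground
    X = np.vstack([bg, fg])
    y = np.array([0] * 40 + [1] * 20)
    W = learn_metric(X, y)
    Z = X @ W
    dist = np.linalg.norm(Z - Z[y == 0].mean(axis=0), axis=1)
    sal = (dist - dist.min()) / (dist.max() - dist.min() + 1e-12)
    print("mean saliency  bg: %.2f  fg: %.2f"
          % (sal[y == 0].mean(), sal[y == 1].mean()))
```

The split between the two update steps mirrors the alternating strategy the abstract describes: the gradient step handles the smooth discriminative objective, while the proximal group shrinkage enforces the structured sparsity that selects a compact semantic mapping space.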
