Abstract

In this paper, we address the problem of fusing various saliency detection methods such that the fusion result outperforms each of the individual methods. We observe that the salient regions indicated by different saliency maps cover, with high probability, parts of the salient object. When image regions are represented by their saliency values across multiple saliency maps, the object regions are strongly correlated and thus lie in a low-dimensional subspace. Meanwhile, most background regions tend to have low saliency values in the various saliency maps; they are also strongly correlated and lie in a low-dimensional subspace that is independent of the object subspace. Therefore, an image can be represented as the combination of two low-rank matrices. To obtain a unified low-rank matrix that represents the salient object, this paper presents a double low-rank matrix recovery model for saliency fusion. The inference process is formulated as a constrained nuclear norm minimization problem, which is convex and can be solved efficiently with the alternating direction method of multipliers (ADMM). Furthermore, to reduce the computational complexity of the proposed saliency fusion method, a saliency model selection strategy based on sparse representation is proposed. Experiments on five datasets show that our method consistently outperforms each individual saliency detection approach as well as other state-of-the-art saliency fusion methods.

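To make the optimization machinery concrete, the sketch below shows a minimal ADMM solver for the simpler, robust-PCA-style decomposition D ≈ L + S (low-rank plus sparse), where singular value thresholding handles the nuclear norm term. This is not the paper's double low-rank model; it only illustrates the nuclear-norm-prox and dual-update steps that an ADMM solver of this kind iterates. The matrix D stacking per-region saliency values from multiple maps, and the parameters lam and mu, are assumptions for illustration.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def soft_threshold(X, tau):
    """Elementwise shrinkage: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def lowrank_sparse_admm(D, lam=None, mu=None, n_iters=200, tol=1e-7):
    """Split D into a low-rank part L and a sparse part S via ADMM.

    D: (regions x saliency maps) matrix of saliency values (hypothetical layout).
    """
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # scaled dual variable for the constraint D = L + S
    for _ in range(n_iters):
        L = svt(D - S + Y / mu, 1.0 / mu)             # nuclear-norm prox step
        S = soft_threshold(D - L + Y / mu, lam / mu)  # l1 prox step
        R = D - L - S                                 # primal residual
        Y = Y + mu * R                                # dual ascent update
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return L, S
```

In such a scheme, each subproblem has a closed-form proximal solution, so the per-iteration cost is dominated by one SVD of an (regions x maps) matrix, which is what makes nuclear norm minimization by ADMM practical at this scale.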