Abstract

In this paper, we propose a bottom-up visual saliency detection algorithm. Unlike most previous methods, which concentrate mainly on the image object, we take both the background and the foreground into consideration. First, we collect background seeds from image border superpixels using boundary information and compute a background-based saliency map. Second, we select foreground seeds by segmenting the first-stage saliency map with an adaptive threshold and compute a foreground-based saliency map. Third, the two saliency maps are integrated by the proposed unified function. Finally, we refine the integrated result to obtain a smoother and more accurate saliency map. Moreover, the unified formula also proves effective in combining the proposed approach with other models. Experiments on publicly available data sets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.
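
As a rough illustration of the two-stage pipeline summarized above, the following Python sketch mimics its structure at the pixel level: background seeds taken from the image border, foreground seeds obtained by thresholding the first-stage map, and a fusion of the two maps. The border width, the mean-based threshold, the color-distance scoring, and the convex combination used for integration are all simplifying assumptions for illustration; the paper operates on superpixels, filters border seeds by boundary information, and uses its own unified integration function and refinement step.

```python
import numpy as np

def background_saliency(img, border=10):
    # Simplified stage 1: treat pixels near the image border as background
    # seeds and score each pixel by its color distance to the mean seed color.
    h, w, _ = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[:border, :] = mask[-border:, :] = True
    mask[:, :border] = mask[:, -border:] = True
    bg_color = img[mask].mean(axis=0)
    sal = np.linalg.norm(img - bg_color, axis=2)
    return sal / (sal.max() + 1e-8)

def foreground_saliency(img, bg_sal):
    # Simplified stage 2: select foreground seeds by thresholding the
    # first-stage map (mean value as a stand-in for an adaptive threshold)
    # and score pixels by similarity to the mean foreground color.
    seeds = bg_sal > bg_sal.mean()
    fg_color = img[seeds].mean(axis=0)
    dist = np.linalg.norm(img - fg_color, axis=2)
    return 1.0 - dist / (dist.max() + 1e-8)

def combine(bg_sal, fg_sal, alpha=0.5):
    # Placeholder for the paper's unified integration function:
    # here a simple convex combination of the two maps.
    fused = alpha * bg_sal + (1.0 - alpha) * fg_sal
    return fused / (fused.max() + 1e-8)

if __name__ == "__main__":
    # Synthetic test image: a bright square ("object") on a dark background.
    img = np.zeros((100, 100, 3), dtype=np.float64)
    img[35:65, 35:65] = [0.9, 0.7, 0.2]
    bg_sal = background_saliency(img)
    fg_sal = foreground_saliency(img, bg_sal)
    final = combine(bg_sal, fg_sal)
    print("object saliency:", final[50, 50], "border saliency:", final[5, 5])
```

On this toy input the object region receives a high fused score and the border a low one, which conveys the intent of combining complementary background-based and foreground-based cues, though not the paper's actual superpixel-level formulation.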
