Abstract

Visual saliency detection has become an active research direction in recent years, and a large number of saliency models, which can automatically locate objects of interest in images, have been developed. Because these models rely on different prior assumptions, image features, and computational methodologies, each has its own strengths and weaknesses and may handle only one or a few types of images well. Motivated by these observations, this paper proposes a salient object detection approach that infers a superior model from a variety of existing imperfect saliency models by optimally leveraging the complementary information among them. The proposed approach consists of three steps. First, a number of existing unsupervised saliency models provide weak/imperfect saliency predictions for each region in the image. Then, a fusion strategy combines each image region's weak saliency predictions into a strong one by simultaneously considering the performance differences among the weak predictions and the characteristics of different image regions. Finally, a local spatial consistency constraint, which encourages neighboring image regions with similar features to receive similar saliency labels, is applied to refine the results. Comprehensive experiments on five public benchmark datasets and comparisons with a number of state-of-the-art approaches demonstrate the effectiveness of the proposed work.
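To make the three-step pipeline concrete, the following is a minimal sketch in Python (NumPy). It is not the paper's implementation: the paper derives fusion weights that also account for each region's characteristics, whereas this toy uses a single fixed weight per weak model, and the refinement step is a generic feature-weighted graph diffusion standing in for the proposed local spatial consistency constraint. All names (fuse_weak_predictions, refine_with_spatial_consistency) and parameters (sigma, alpha, iters) are illustrative assumptions.

    import numpy as np

    def fuse_weak_predictions(weak_maps, model_weights):
        """Step 2 (simplified): fuse M weak per-region saliency
        predictions, weak_maps of shape (M, R), into one strong
        prediction of shape (R,) via a normalized weighted sum."""
        w = np.asarray(model_weights, dtype=float)
        w = w / w.sum()                      # normalize model weights
        return w @ np.asarray(weak_maps, dtype=float)

    def refine_with_spatial_consistency(saliency, features, edges,
                                        sigma=0.1, alpha=0.5, iters=20):
        """Step 3 (simplified): diffuse saliency over a region
        adjacency graph so that neighboring regions with similar
        features end up with similar saliency values."""
        R = saliency.shape[0]
        W = np.zeros((R, R))
        for i, j in edges:                   # edges: neighboring region pairs
            # Affinity decays with feature distance between neighbors.
            a = np.exp(-np.sum((features[i] - features[j]) ** 2) / sigma)
            W[i, j] = W[j, i] = a
        P = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # row-normalize
        s = saliency.astype(float).copy()
        for _ in range(iters):
            # Blend neighbor average with the fused map to keep fidelity.
            s = alpha * (P @ s) + (1 - alpha) * saliency
        return s

    # Toy usage: 4 weak models, 6 regions, 3-D region features.
    rng = np.random.default_rng(0)
    weak = rng.random((4, 6))
    feats = rng.random((6, 3))
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
    fused = fuse_weak_predictions(weak, [1.0, 0.5, 0.8, 0.3])
    refined = refine_with_spatial_consistency(fused, feats, edges)
    print(refined)

The fidelity term (1 - alpha) * saliency in the diffusion keeps the refined map anchored to the fused prediction, so smoothing sharpens region-level agreement without washing out the detected objects.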
