Abstract

Detection of salient regions in natural scenes is useful for computer vision applications such as image segmentation, object recognition, and image retrieval. In this paper, we propose a new bottom-up visual saliency detection method, motivated by an analysis of the weaknesses of the frequency-tuned saliency detection method. The proposed method represents the image in the YCbCr color space and computes, for each color channel (feature), the Mahalanobis distance between each pixel and the image mean. The weights of all features are then estimated and used to fuse the per-feature maps into the final saliency map. Our method is easy to implement and computationally efficient. We compare our approach to five state-of-the-art saliency detection methods using publicly available ground truth. The experimental results show that the proposed method effectively detects salient regions and outperforms the other five methods in both qualitative and quantitative terms.
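The pipeline described above (YCbCr conversion, per-channel Mahalanobis distance from the image mean, weighted fusion) can be sketched as follows. This is an illustrative reconstruction from the abstract only, not the authors' implementation: the BT.601 conversion matrix, the per-channel (diagonal-covariance) form of the Mahalanobis distance, and the energy-proportional feature weighting are all assumptions.

```python
import numpy as np

def ycbcr_saliency(rgb, weights=None):
    """Sketch of a per-channel Mahalanobis-distance saliency map.

    rgb: H x W x 3 array with values in [0, 255].
    weights: optional length-3 feature weights; if None, an assumed
    scheme (proportional to each map's mean energy) is used.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    # ITU-R BT.601 RGB -> YCbCr conversion (an assumption; the paper
    # does not specify which YCbCr variant is used)
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    ycbcr = rgb @ m.T + np.array([0.0, 128.0, 128.0])

    sal_maps = []
    for c in range(3):
        ch = ycbcr[..., c]
        mu, var = ch.mean(), ch.var() + 1e-8
        # Per-channel Mahalanobis distance of each pixel from the
        # channel mean (reduces to a standardized distance when the
        # channels are treated independently)
        sal_maps.append(np.sqrt((ch - mu) ** 2 / var))
    sal_maps = np.stack(sal_maps, axis=-1)

    if weights is None:
        # Assumed weighting: each feature's weight is proportional
        # to the mean energy of its saliency map
        w = sal_maps.mean(axis=(0, 1))
        weights = w / (w.sum() + 1e-8)

    s = (sal_maps * np.asarray(weights)).sum(axis=-1)
    # Normalize the fused map to [0, 1]
    return (s - s.min()) / (s.max() - s.min() + 1e-8)
```

For example, on a dark image containing a single bright patch, the pixels of the patch (far from the image mean in the luminance channel) receive the highest saliency values.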
