Abstract

We present an improved saliency detection method based on hypergraphs at adaptive multiscales (HAM). An input image is characterized by hypergraphs in which hyperedges capture the contextual properties of regions, so the saliency detection problem becomes one of finding salient vertices and hyperedges in hypergraphs. The HAM method first adaptively adjusts the ranges of pixel values in the R, G, and B channels of an input image and uses these ranges to determine adaptive scales. It then models the image as a hypergraph at each scale, in which hyperedges are clustered by agglomerative mean-shift. Because each hypergraph is built at an adaptive scale rather than a fixed one, the HAM method obtains more single-scale hypergraphs and thus achieves higher accuracy than previous methods. Extensive experiments on three benchmark datasets demonstrate that the HAM method improves saliency detection performance, especially for images with narrow ranges of pixel values.
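The abstract does not give the exact rule that maps the per-channel pixel-value ranges to adaptive scales. The sketch below illustrates one plausible reading of that step, in which a wider channel range yields a finer scale; the function names (channel_ranges, adaptive_scales) and the linear mapping are assumptions made for illustration, not the paper's definitions.

```python
# Minimal sketch of the adaptive-scale step, under assumptions stated above.
import numpy as np


def channel_ranges(image):
    """Return (max - min) of pixel values for each of the R, G, B channels.

    `image` is an H x W x 3 uint8 array.
    """
    flat = image.reshape(-1, 3).astype(np.float64)
    return flat.max(axis=0) - flat.min(axis=0)


def adaptive_scales(image, min_scale=8, max_scale=64):
    """Map each channel's value range onto a spatial scale.

    Assumption (not from the paper): a narrow range (low-contrast channel)
    maps to a coarser scale, a wide range to a finer one, via a simple
    linear rule.
    """
    ranges = channel_ranges(image)             # per-channel value spread in [0, 255]
    norm = np.clip(ranges / 255.0, 1e-6, 1.0)  # normalize to (0, 1]
    # Wider range -> smaller (finer) scale; narrower range -> larger scale.
    scales = max_scale - norm * (max_scale - min_scale)
    return np.round(scales).astype(int)


# Example usage on a random image:
# img = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
# print(adaptive_scales(img))  # one scale per channel
```

Each resulting scale would then drive the construction of one single-scale hypergraph, whose hyperedges are grouped by agglomerative mean-shift as described in the abstract.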
