Abstract

Since an image background is normally composed of homogeneous regions, it can be represented by a feature dictionary via sparse representation. Based on this observation, the authors propose a novel bottom-up saliency detection method that unites the complementary merits of sparse representation and multi-hierarchical layers. In contrast to most existing sparse-representation-based approaches, which tend to highlight only the boundaries of a target, the proposed method highlights the entire object even when it is large. Given a source image, a multi-scale background dictionary is constructed from features of different layers. Each region of the image is then reconstructed with this dictionary, and its reconstruction error is taken as its saliency score. Although these scores yield a reconstruction map, it is not adequate as the final result because of its low resolution and high false-detection rate. Therefore, as the middle cue, the authors propose a multi-scale contour zooming approach to correct false detections across the hierarchical layers. To improve the resolution of the final detection, a pixel-level rectification based on the Bayesian observation likelihood is computed as the bottom cue. By combining sparse representation with multi-scale correction, the method significantly improves the precision of the final saliency map.
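For intuition, the sketch below illustrates the core idea of the top cue: score a region by how poorly a dictionary of background features reconstructs it. The function names, the use of a simple orthogonal-matching-pursuit-style sparse coder, and the `n_nonzero` sparsity level are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sparse_reconstruction_error(D, x, n_nonzero=5):
    # Greedy OMP-style sparse coding (an assumed stand-in for the paper's
    # sparse coder): approximate the region feature x with a few columns
    # (atoms) of the background dictionary D and return the residual norm,
    # which serves as the region's saliency score.
    residual = x.astype(float)
    support = []
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    return float(np.linalg.norm(residual))

def reconstruction_map(background_features, region_features, n_nonzero=5):
    # background_features: (d, m) matrix whose columns are features of
    # (multi-scale) boundary/background regions.
    # region_features: (d, n) matrix whose columns are features of the
    # image regions to score.
    D = background_features / (
        np.linalg.norm(background_features, axis=0, keepdims=True) + 1e-12)
    scores = np.array([
        sparse_reconstruction_error(D, region_features[:, j], n_nonzero)
        for j in range(region_features.shape[1])
    ])
    # Normalise to [0, 1]: regions that the background atoms reconstruct
    # poorly (large residual) receive high saliency.
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
```

In the paper's pipeline this coarse, region-level map would then be refined by the middle cue (multi-scale contour zooming) and the bottom cue (Bayesian pixel-level rectification), which the sketch does not attempt to reproduce.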
