Abstract

Modeling visual saliency is an important research domain in computer vision, given the significant role of attention mechanisms in the neural processing of visual information. This work presents a new approach for constructing image representations of salient locations, commonly known as saliency maps. The method is based on an efficient comparison scheme for local sparse representations derived from non-overlapping image patches. The sparse coding stage is implemented via an overcomplete dictionary trained on natural images with a soft-competitive, bio-inspired algorithm. The resulting local sparse codes are compared pairwise using the Hamming distance as a gauge of their co-activation. The computed distances quantify the saliency strength of each individual patch, and the saliency values are then non-linearly filtered to form the final map. Evaluation results on four image databases demonstrate the competitive performance of the proposed approach against several state-of-the-art saliency modeling algorithms. More importantly, the proposed scheme is simple, efficient, and robust under a variety of visual conditions, making it an ideal candidate for a hardware implementation of a front-end saliency modeling module in a computer vision system.
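
The abstract describes a concrete pipeline: patch extraction, sparse coding against a pre-trained overcomplete dictionary, pairwise Hamming comparison of the activation patterns, per-patch saliency scoring, and a final non-linear filter. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: the orthogonal matching pursuit encoder (standing in for the paper's soft-competitive, bio-inspired coding stage), the mean-distance saliency score, and the Gaussian smoothing used as the non-linear filter are all assumptions introduced for illustration, and the dictionary is assumed to be supplied pre-trained.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.decomposition import sparse_encode

    def saliency_map(image, dictionary, patch=8, n_nonzero=10):
        """Sketch of the patch-based saliency pipeline (illustrative only).

        `image` is a 2-D grayscale array; `dictionary` is an overcomplete
        dictionary of shape (n_atoms, patch*patch) with unit-norm rows,
        assumed pre-trained on natural image patches.
        """
        H, W = image.shape
        rows, cols = H // patch, W // patch
        # Extract non-overlapping patches and flatten each into a vector.
        patches = np.stack([
            image[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch].ravel()
            for r in range(rows) for c in range(cols)
        ])
        # Sparse coding stage; OMP stands in for the paper's encoder.
        codes = sparse_encode(patches, dictionary,
                              algorithm="omp",
                              n_nonzero_coefs=n_nonzero)
        # Binary activation patterns: which atoms fire for each patch.
        active = codes != 0
        # Pairwise Hamming distances between activation patterns, used as
        # a gauge of co-activation (quadratic in the number of patches).
        hamming = (active[:, None, :] != active[None, :, :]).sum(axis=-1)
        # Assumed scoring rule: a patch whose code resembles few others
        # (large mean distance) is treated as salient.
        sal = hamming.mean(axis=1).reshape(rows, cols)
        # Upsample to pixel resolution; Gaussian smoothing plus range
        # normalization stands in for the paper's non-linear filtering.
        sal = np.kron(sal, np.ones((patch, patch)))
        sal = gaussian_filter(sal, sigma=patch)
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

    # Usage (hypothetical pre-trained dictionary of 256 atoms for 8x8 patches):
    # D = np.load("dictionary.npy")   # shape (256, 64), rows unit-norm
    # smap = saliency_map(gray_image, D)

Binarizing the codes before comparison means the Hamming distance depends only on which dictionary atoms are active, not on their coefficients, which is what makes it a co-activation measure.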
