Abstract

The salient region of an image, composed of salient or interest points, is its most informative part. In this paper, a saliency-based bottom-up visual attention computational model motivated by visual physiological experimental results is used to detect the salient region and extract salient points of images. A method for selecting the number of salient points to extract from each image is also presented. Two salient visual features based on the visual attention model are proposed for image retrieval. The first is the "attention histogram", which counts the frequencies of a visual feature only within the salient region of the image. The second is the "salient image signature histogram and spatial FOA (focus of attention) anglogram", which encodes both the local properties around the salient points of the image and the spatial information of the FOAs. Image retrieval experiments were carried out to evaluate the proposed features, with the traditional global histogram used for comparison. Preliminary results show that the proposed visual attention-based salient features achieve encouraging retrieval performance.
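As a rough illustration of the "attention histogram" idea described above, the sketch below (a minimal, assumption-laden example, not the authors' implementation) computes a grey-level histogram restricted to pixels whose saliency exceeds a threshold. The saliency map is assumed to come from a bottom-up attention model and is represented here simply as a 2-D array; the threshold value and bin count are illustrative choices.

```python
import numpy as np

def attention_histogram(image, saliency_map, bins=16, saliency_threshold=0.5):
    """Histogram of grey levels counted only over the salient region.

    image              : 2-D array of grey values in [0, 255]
    saliency_map       : 2-D array of saliency values in [0, 1], same shape as image
                         (assumed output of a bottom-up visual attention model)
    bins               : number of histogram bins
    saliency_threshold : pixels with saliency above this value form the salient region
    """
    mask = saliency_map >= saliency_threshold   # binary mask of the salient region
    salient_pixels = image[mask]                # keep only pixels inside that region
    hist, _ = np.histogram(salient_pixels, bins=bins, range=(0, 255))
    # Normalise so images with salient regions of different sizes remain comparable
    return hist / max(hist.sum(), 1)

# Usage with random data standing in for an image and its saliency map
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
sal = rng.random((64, 64))
print(attention_histogram(img, sal))
```

In contrast to a global histogram, which counts every pixel, this masked histogram discards background regions that the attention model deems uninformative, which is the intended advantage of the attention-based features evaluated in the paper.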
