Abstract

In computer vision, an object can be modeled in two main ways: by explicitly measuring its characteristics in terms of feature vectors, or by capturing the relations which link an object with some exemplars, that is, in terms of similarities. In this paper, we propose a new similarity-based descriptor, dubbed structural similarity cross-covariance tensor (SS-CCT), where self-similarities come into play: here the entity to be measured and the exemplar are regions of the same object, and their similarities are encoded in terms of cross-covariance matrices. These matrices are computed from a set of low-level feature vectors extracted from pairs of regions that cover the entire image. SS-CCT shares some similarities with the widely used covariance matrix descriptor, but extends its power by focusing on structural similarities across multiple parts of an image, instead of capturing local similarities within a single region. The effectiveness of SS-CCT is tested on diverse classification scenarios, considering objects and scenes on widely known benchmarks (Caltech-101, Caltech-256, PASCAL VOC 2007 and SenseCam). In all cases, the results obtained demonstrate the superiority of our new descriptor over diverse competitors. Furthermore, we also report an analysis of the reduced computational burden achieved by an efficient implementation that takes advantage of the integral image representation.
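To make the core building block concrete, the following minimal NumPy sketch computes one cross-covariance matrix between the low-level features of two image regions. It is an illustrative sketch rather than the paper's exact formulation: the feature dimensionality, the sampling grid, and the assumption that the two regions yield paired samples are all hypothetical choices made for the example.

```python
import numpy as np

def cross_covariance(F, G):
    """Cross-covariance between paired low-level feature vectors of two regions.

    F, G : (n, d) arrays -- one d-dimensional feature vector per sampled point
    of region A and region B, respectively. The points are assumed to be
    paired, e.g. taken on the same regular grid inside each region.
    Returns a (d, d) cross-covariance matrix (one block of the descriptor).
    """
    F = np.asarray(F, dtype=float)
    G = np.asarray(G, dtype=float)
    Fc = F - F.mean(axis=0)          # center the region-A features
    Gc = G - G.mean(axis=0)          # center the region-B features
    return Fc.T @ Gc / (F.shape[0] - 1)

# Toy usage: 9-dimensional features (e.g. intensity, gradients, color)
# sampled at 256 paired points of two regions of the same image.
rng = np.random.default_rng(0)
F = rng.normal(size=(256, 9))
G = rng.normal(size=(256, 9))
C = cross_covariance(F, G)           # (9, 9) cross-covariance block
print(C.shape)
```

In practice, the sums of features and of feature products needed by covariance-style descriptors can be accumulated once over the whole image with integral images, so that the statistics of any rectangular region are obtained in constant time; this is the kind of acceleration the efficient implementation mentioned in the abstract relies on.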
