Abstract
With the increasing availability of low-cost yet precise depth cameras, "texture+depth" content has become increasingly popular in several computer vision and 3D rendering tasks. Indeed, depth images provide enriched geometric information about the scene that would be difficult, and often impossible, to estimate from conventional texture pictures alone. In this paper, we investigate how the geometric information provided by depth data can be employed to improve the stability of local visual features under a wide range of viewpoint changes. Specifically, we leverage depth information to derive local projective transformations, which we use to compute descriptor patches from the texture image. Since the proposed approach can be used with any blob detector, it can be seamlessly integrated into the processing chain of state-of-the-art visual features such as SIFT. Our experiments show that geometry-aware feature extraction improves descriptor distinctiveness with respect to state-of-the-art scale- and affine-invariant approaches.
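To make the core idea concrete, here is a minimal illustrative sketch (not the paper's actual implementation) of how a local projective transformation can be derived from depth: fit a plane to the back-projected depth neighbourhood of a keypoint, then rectify the surrounding patch with the pure-rotation homography H = K R K^-1 of a virtual camera whose optical axis is aligned with the surface normal. All function names, the neighbourhood radius, and the use of NumPy are assumptions made for this example.

```python
import numpy as np

def backproject(depth, K, us, vs):
    """Back-project pixels to 3-D camera coordinates: X = z * K^-1 [u, v, 1]^T."""
    z = depth[vs, us]
    pix = np.stack([us, vs, np.ones_like(us)]).astype(float)
    return (np.linalg.inv(K) @ pix) * z          # 3 x N points

def local_plane_normal(depth, K, u, v, r=5):
    """Estimate the unit surface normal around keypoint (u, v) by fitting
    a plane (via SVD) to the back-projected depth neighbourhood."""
    us, vs = np.meshgrid(np.arange(u - r, u + r + 1),
                         np.arange(v - r, v + r + 1))
    P = backproject(depth, K, us.ravel(), vs.ravel())
    c = P.mean(axis=1, keepdims=True)
    _, _, Vt = np.linalg.svd((P - c).T, full_matrices=False)
    n = Vt[-1]                                   # direction of least variance
    return -n if n[2] > 0 else n                 # orient toward the camera

def rectifying_homography(K, n):
    """Homography that renders the local plane fronto-parallel.

    A virtual camera sharing the optical centre, but rotated so its optical
    axis is anti-parallel to the surface normal, induces the pure-rotation
    homography H = K R K^-1 (no plane depth needed). Assumes the normal is
    not parallel to the image y-axis."""
    r3 = -n / np.linalg.norm(n)                  # new optical axis
    r1 = np.cross([0.0, 1.0, 0.0], r3)
    r1 /= np.linalg.norm(r1)
    r2 = np.cross(r3, r1)
    R = np.stack([r1, r2, r3])                   # rows = virtual camera axes
    return K @ R @ np.linalg.inv(K)
```

A descriptor patch would then be resampled from the texture image through the inverse homography (e.g. with bilinear interpolation) before the standard description step of a feature pipeline such as SIFT.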