Abstract

Over the last decade, a wealth of research has been devoted to building integrated vision systems capable of both recognising objects and providing their spatial information. Object recognition and pose estimation are among the most popular and challenging tasks in computer vision. Towards this end, the authors propose a novel algorithm for estimating the depth of objects in a scene. Moreover, they comparatively study two widely used feature-based approaches, namely the scale-invariant feature transform (SIFT) and the speeded-up robust features (SURF) algorithm, in the particular application of locating an object in a scene relative to the camera, based on the proposed algorithm. Experimental results support the authors' claim that an accurate estimate of an object's depth in a scene can be obtained by taking into account the distribution of extracted features over the target's surface.
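The core intuition behind depth estimation from feature distribution can be sketched as follows. Under a pinhole camera model, the image-plane spread of features detected on a target shrinks in proportion to its distance from the camera, so a reference view at a known depth lets one estimate the depth of a new view from the ratio of spreads. This is a minimal illustrative sketch of that idea, not the authors' exact algorithm; the function name, the RMS-spread measure, and the reference-depth calibration are assumptions for illustration.

```python
import numpy as np

def estimate_depth(ref_keypoints, obs_keypoints, ref_depth):
    """Estimate object depth from the spatial spread of matched keypoints.

    Under a pinhole camera model the image-plane spread of features on a
    target scales inversely with its depth, so
        depth ~ ref_depth * ref_spread / obs_spread.
    Hypothetical helper illustrating the feature-distribution idea;
    not the authors' published algorithm.
    """
    ref = np.asarray(ref_keypoints, dtype=float)
    obs = np.asarray(obs_keypoints, dtype=float)
    # Spread = RMS distance of the keypoints from their centroid.
    ref_spread = np.sqrt(((ref - ref.mean(axis=0)) ** 2).sum(axis=1).mean())
    obs_spread = np.sqrt(((obs - obs.mean(axis=0)) ** 2).sum(axis=1).mean())
    return ref_depth * ref_spread / obs_spread

# Reference view at 1.0 m; the same square of features observed half as large
ref_pts = [(0, 0), (100, 0), (100, 100), (0, 100)]
obs_pts = [(0, 0), (50, 0), (50, 50), (0, 50)]
print(estimate_depth(ref_pts, obs_pts, ref_depth=1.0))  # -> 2.0
```

In a full pipeline, `ref_keypoints` and `obs_keypoints` would come from matching SIFT or SURF descriptors between the reference and observed images, which is exactly where the two algorithms' feature distributions differ.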
