Abstract

The traditional approach to point set registration matches feature descriptors between the target object and the query image, and then robustly estimates the fundamental matrix with RANSAC to align the target in the image. However, this approach can easily fail under occlusion, background clutter, and changes in scale and camera viewpoint, since RANSAC is unable to filter out large numbers of outliers. In our proposal, the target is represented by an attribute graph, whose vertices represent salient features describing the target object and whose edges encode their spatial relationships. The keypoints tentatively matched between the attribute graph and the descriptors in the query image are filtered taking into account features such as orientation and scale, as well as the structure of the graph. Preliminary results on the Stanford Mobile Visual Search dataset and the Stanford Streaming Mobile Augmented Reality dataset show that our proposal yields more valid matches at a lower computational cost than the standard RANSAC-based approach.
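The orientation- and scale-based filtering mentioned above can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's implementation: it assumes each tentative match carries the orientation difference and log-scale ratio of its two keypoints, estimates the dominant global rotation and scale change with a robust median, and discards matches that disagree with that trend. The function name `consistency_filter` and the tolerance parameters are illustrative assumptions.

```python
import numpy as np

def consistency_filter(d_theta, d_log_scale, angle_tol=20.0, scale_tol=0.5):
    """Keep tentative matches whose relative orientation and scale agree
    with the dominant trend across all matches.

    d_theta     -- orientation difference (degrees) per match
    d_log_scale -- log of the keypoint scale ratio per match
    (Hypothetical sketch of geometric match filtering; not the paper's code.)
    """
    d_theta = np.asarray(d_theta, dtype=float)
    d_log_scale = np.asarray(d_log_scale, dtype=float)
    # Robustly estimate the dominant rotation and scale change
    # between target and query as the median over all matches.
    theta0 = np.median(d_theta)
    s0 = np.median(d_log_scale)
    # Keep only matches consistent with the dominant transform.
    keep = (np.abs(d_theta - theta0) < angle_tol) & \
           (np.abs(d_log_scale - s0) < scale_tol)
    return keep
```

In this sketch, a match whose keypoint rotates or rescales very differently from the majority is treated as an outlier before any model fitting, which is cheaper than iterating RANSAC over a contaminated match set.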
