Abstract

A feature point extraction and matching algorithm based on the affine transformation space is proposed to address the shortcomings of existing feature extraction and matching algorithms in scenes with large viewpoint changes, namely few effective matching points and slow matching speed. The algorithm first constructs the affine transformation space to simulate viewpoint changes and obtain affine invariance; it then avoids feature point detection in invalid regions by partitioning the valid regions. In the feature description stage, the ORB algorithm is incorporated into the affine transformation space, and gradient contrast information from multiple directions in the sampling region around each feature point is fused to obtain the final binary descriptor. Experiments on large-viewpoint datasets and image sequences demonstrate that the algorithm achieves better matching performance in large-viewpoint scenes and is also more time-efficient.
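
The core idea of the pipeline (simulating viewpoint changes with affine warps and running ORB on each simulated view) can be illustrated with a short sketch. The following Python/OpenCV code is only an assumption-based approximation of that idea, not the authors' implementation: the tilt and rotation parameters, the ORB settings, and the helper function `affine_orb_features` are hypothetical, and the valid-region partitioning and multi-direction gradient fusion described in the abstract are not reproduced here.

```python
# Minimal sketch (assumption, not the paper's code): ASIFT-style affine view
# simulation combined with ORB detection, roughly mirroring the pipeline above.
import cv2
import numpy as np

def affine_orb_features(img, tilts=(1.0, 1.41, 2.0), angles=range(0, 180, 60)):
    """Detect ORB features on affine-simulated views and map them back."""
    orb = cv2.ORB_create(nfeatures=1000)
    h, w = img.shape[:2]
    all_kps, all_descs = [], []
    for t in tilts:
        for phi in angles:
            # Rotate by phi, then compress the x axis by the tilt factor t
            # to simulate an oblique camera viewpoint.
            A = cv2.getRotationMatrix2D((w / 2, h / 2), phi, 1.0)
            A[0, :] /= t
            warped = cv2.warpAffine(img, A, (w, h))
            kps, descs = orb.detectAndCompute(warped, None)
            if descs is None:
                continue
            # Map keypoint coordinates back into the original image frame.
            A_inv = cv2.invertAffineTransform(A)
            pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
            back = cv2.transform(pts, A_inv).reshape(-1, 2)
            for kp, (x, y) in zip(kps, back):
                kp.pt = (float(x), float(y))
                all_kps.append(kp)
            all_descs.append(descs)
    return all_kps, (np.vstack(all_descs) if all_descs else None)
```

The resulting keypoints and binary descriptors could then be matched with a Hamming-distance brute-force matcher, which is the usual pairing for ORB-style descriptors.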
