Abstract

Local feature matching is a component of many larger vision tasks and usually consists of three stages: feature detection, description, and matching. Because matching typically serves a downstream task such as camera pose estimation, geometric information is crucial. We propose a geometric feature embedding matching method (GFM) for local feature matching. It comprises an adaptive keypoint geometric embedding module, which dynamically adjusts keypoint position information, and an orientation geometric embedding, which explicitly models geometric information about rotation. We then interleave self-attention and cross-attention to enhance the local features, form a score matrix by multiplying the enhanced features, and solve for correspondences with a dual-softmax, yielding an intuitive, human-like extraction and matching scheme. To verify the effectiveness of the proposed method, we evaluated it on three datasets (MegaDepth, HPatches, Aachen Day-Night v1.1) using their respective metrics; the results show that our method achieves satisfactory performance in all scenes.
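The matching stage described above can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation: the function name, temperature, confidence threshold, and mutual-nearest filtering are assumptions chosen for illustration; only the general idea (score matrix from enhanced features, correspondences via dual-softmax) comes from the abstract.

```python
import torch

def dual_softmax_match(feat_a: torch.Tensor,
                       feat_b: torch.Tensor,
                       temperature: float = 0.1,   # assumed value, not from the paper
                       threshold: float = 0.2):    # assumed value, not from the paper
    """Match two sets of attention-enhanced local features.

    feat_a: (N, D) features of image A after self-/cross-attention enhancement
    feat_b: (M, D) features of image B
    Returns indices (i, j) and their dual-softmax confidence scores.
    """
    # Score matrix from the inner product of the enhanced features.
    scores = feat_a @ feat_b.t() / temperature          # (N, M)

    # Dual-softmax: normalize over rows and columns, then combine.
    p = scores.softmax(dim=1) * scores.softmax(dim=0)   # (N, M)

    # Mutual-nearest-neighbour check plus a confidence threshold (assumed filtering step).
    mask = (p == p.max(dim=1, keepdim=True).values) \
         & (p == p.max(dim=0, keepdim=True).values) \
         & (p > threshold)
    idx_a, idx_b = mask.nonzero(as_tuple=True)
    return idx_a, idx_b, p[idx_a, idx_b]
```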
