Abstract

Finding a reliable correspondence between two feature point sets efficiently and accurately is the first step in many computer vision applications, and a crucial prerequisite for remote sensing, target recognition, and photogrammetry. However, in different application scenarios images may undergo different degrees of deformation, making a predefined geometric model no longer appropriate. This article aims to remove mismatches from a set of putative matching points. To achieve this goal, we propose an effective matching method, called the "visualized local structure generation-Siamese attention" network (VLSG-SANet), which transforms the task of eliminating feature-point mismatches into a dynamic visual similarity assessment. VLSG-SANet generates visual descriptors from the local structure information around each feature point. We embed an attention module in a Siamese network that compares the generated descriptors, allowing the network to adjust the descriptors dynamically. To demonstrate the robustness and versatility of VLSG-SANet, we conducted extensive experiments on open remote sensing images and obtained good results.
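The abstract's core idea (encode each feature point's local geometric structure as a descriptor, then judge a putative match by descriptor similarity with attention-style reweighting) can be illustrated with a minimal numpy sketch. Everything below is an assumption for illustration: the angle-histogram descriptor, the `attention_weights` reweighting, and all function names are simplified stand-ins, not the paper's actual VLSG-SANet architecture, which uses a learned Siamese network.

```python
import numpy as np

def local_structure_descriptor(point, neighbors, bins=8):
    """Hypothetical local-structure descriptor: a normalized histogram of
    angles from `point` to its neighboring feature points. Translation of
    the whole neighborhood leaves the descriptor unchanged."""
    diffs = neighbors - point
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])       # angles in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    hist = hist.astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def attention_weights(d1, d2):
    """Toy, hand-crafted stand-in for a learned attention module:
    up-weight descriptor channels where the two sides agree."""
    w = np.exp(-np.abs(d1 - d2))
    return w / w.sum()

def similarity(d1, d2):
    """Attention-reweighted cosine similarity between two descriptors;
    a putative match would be kept when this exceeds a threshold."""
    w = attention_weights(d1, d2)
    num = float(np.dot(w * d1, d2))
    den = np.linalg.norm(w * d1) * np.linalg.norm(d2) + 1e-12
    return num / den
```

Under this sketch, a correct match (same neighborhood geometry, possibly translated) scores near 1, while a mismatch whose neighborhoods disagree scores near 0, mirroring the "dynamic visual similarity assessment" framing of the abstract.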
