Abstract
Feature matching is a crucial but challenging task in computer vision, particularly in weakly textured scenes where distinctive, repeatable patterns are scarce. We introduce SwinMatcher, a feature matching method aimed at addressing the low match counts and poor matching precision typical of weakly textured scenes. Because image features are inherently local, we employ a local self-attention mechanism to learn from weakly textured areas, preserving weak-texture features as fully as possible. To address incorrect matches in scenes with repetitive patterns, we use a cross-attention and positional encoding mechanism to learn the correct correspondences between repetitive patterns across the two views, achieving higher matching precision. We also introduce a matching refinement step that computes the spatial expected coordinates over the local two-dimensional heat map of each correspondence to obtain the final sub-pixel matches. Experiments show that, under identical training conditions, SwinMatcher outperforms other standard methods in pose estimation, homography estimation, and visual localization. It exhibits strong robustness and superior matching in weakly textured areas, offering a new research direction for feature matching in weakly textured images.
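The sub-pixel refinement mentioned above can be illustrated with a spatial expectation (soft-argmax) over a local heat map. The sketch below is a minimal, hedged example of that idea; the window size, temperature, and tensor shapes are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: sub-pixel refinement via spatial expectation (soft-argmax)
# over a local 2D correlation heat map around each coarse match.
import torch

def spatial_expectation(heatmap: torch.Tensor, temperature: float = 10.0) -> torch.Tensor:
    """heatmap: (N, W, W) local correlation scores around each coarse match.
    Returns (N, 2) sub-pixel offsets in normalized [-1, 1] window coordinates."""
    n, h, w = heatmap.shape
    # Turn scores into a probability distribution over the local window.
    prob = torch.softmax(temperature * heatmap.view(n, -1), dim=-1).view(n, h, w)
    # Normalized coordinates of the window's pixel centers.
    ys = torch.linspace(-1.0, 1.0, h, device=heatmap.device)
    xs = torch.linspace(-1.0, 1.0, w, device=heatmap.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    # Expected (x, y) coordinate under the softmax distribution.
    exp_x = (prob * grid_x).sum(dim=(1, 2))
    exp_y = (prob * grid_y).sum(dim=(1, 2))
    return torch.stack([exp_x, exp_y], dim=-1)

# Example: refine 100 coarse matches using 5x5 local heat maps.
offsets = spatial_expectation(torch.randn(100, 5, 5))  # (100, 2) sub-pixel offsets
```

Because the expectation is differentiable, this kind of refinement can be trained end-to-end with the matching network, which is one common motivation for preferring it over a hard argmax.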