Abstract

Feature matching is an essential step in a wide range of photogrammetry and computer vision tasks, but it is limited by the ambiguities of local descriptors. Consequently, numerous false matches (outliers) are inevitably generated, especially in complex scenarios. Motion coherence can establish statistical relationships among sparse motions to remove outliers, under the assumption that true matches move coherently while false matches are randomly scattered. However, existing methods model motion coherence either within a local spatial context or without considering rich matching priors, leading to numerous matching failures when outlier rates are high. In this study, we propose a context-enhanced motion coherence modeling (CoMo) method that distinguishes consistent correct motions from erroneous matches for robust outlier rejection. CoMo deploys a consistency-aware motion descriptor that encodes the consistency-related matching priors of feature matches into a high-dimensional representation, providing rich context for differentiating heterogeneous motions. Building on this discriminative descriptor, we introduce the deformable affine transformation (DAT) as a proxy for motion and fit coherent motions from the candidate matches with a globally smooth function under a truncated least squares estimation framework. Extensive experiments on multiple large datasets (including the Image Matching Challenge at CVPR 2020) demonstrate that CoMo effectively models motion coherence from noisy candidate matches and outperforms other state-of-the-art methods in outlier rejection and relative camera pose estimation. The code is available at https://github.com/geovsion/CoMo.
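The coherence assumption behind this family of methods, and the truncated least squares idea, can be illustrated with a minimal sketch. This is not the CoMo method itself: a single global affine motion model stands in for CoMo's deformable transformation, and the function name, threshold `tau`, and iteration count are assumptions chosen for illustration only.

```python
import numpy as np

def truncated_ls_affine(src, dst, tau=5.0, iters=20):
    """Toy motion-coherence outlier rejection: fit one global affine
    motion to candidate matches by truncated least squares. Matches
    whose residual exceeds tau are excluded from the fit at each
    iteration, reflecting the assumption that true matches follow a
    smooth motion while false matches scatter randomly."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])   # homogeneous source coords, (n, 3)
    inliers = np.ones(n, dtype=bool)        # start by trusting every match
    M = None
    for _ in range(iters):
        # least-squares affine fit (3x2 matrix) on the current inlier set
        M, *_ = np.linalg.lstsq(A[inliers], dst[inliers], rcond=None)
        resid = np.linalg.norm(A @ M - dst, axis=1)
        new_inliers = resid < tau           # truncate: drop high-residual matches
        if np.array_equal(new_inliers, inliers):
            break                           # converged
        inliers = new_inliers
    return M, inliers
```

On synthetic matches (an affine motion plus planted random outliers), the iteration typically converges in a few rounds: the fit tightens around the coherent majority and the scattered outliers accumulate large residuals and are rejected.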
