Abstract

Robust image matching is a fundamental and long-standing open problem in computer vision. Conventional approaches exploit redundancy to improve matching robustness (e.g., moving from pairwise to multi-image correspondence), which works well in the spatial domain. Inspired by the success of global optimization-based approaches, we propose a novel extension of cycle consistency from multi-image to multi-descriptor matching, which integrates complementary information from the feature domain. More specifically, building on prior work in permutation synchronization, we construct a novel cycle consistency model for multi-descriptor matching. The model rests on an analogy between multi-image matching and multi-descriptor matching in a virtual universe, which allows us to formulate joint multi-image and multi-descriptor matching as a constrained global optimization problem. We develop a spectral relaxation algorithm to solve this problem, admitting an efficient implementation via fast singular value decomposition (SVD). To demonstrate the robustness of the proposed method, named Cycle Consistency Fusion (C2F), we evaluate it in terms of both raw matching accuracy (pairwise and multi-image) and several higher-level downstream tasks such as homography and camera pose estimation. Extensive experimental results show that C2F consistently outperforms state-of-the-art methods across different datasets and vision tasks.
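The abstract mentions a spectral relaxation solved via SVD, in the spirit of classical permutation synchronization. The following is a minimal sketch of that underlying technique only, not of the C2F method itself: given noiseless pairwise permutation estimates stacked into a block matrix, the top singular vectors recover a consistent labeling up to a global permutation, and each block is projected back to a permutation with the Hungarian algorithm. All function and variable names here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def synchronize_permutations(W, n_images, m):
    """Spectral relaxation of permutation synchronization (illustrative sketch).

    W: (n_images*m, n_images*m) block matrix whose (i, j) block is the
       pairwise permutation estimate approximating P_i @ P_j.T.
    Returns a list of m x m permutation matrices P_i mapping each image's
    points to a common "universe", consistent up to a global relabeling.
    """
    # The top-m singular vectors span the relaxed universe embedding.
    U, _, _ = np.linalg.svd(W)
    U = U[:, :m]
    # Anchor on the first image so each block becomes a scaled permutation.
    anchor = U[:m, :]
    perms = []
    for i in range(n_images):
        block = U[i * m:(i + 1) * m, :] @ anchor.T
        # Project onto the nearest permutation via the Hungarian algorithm.
        rows, cols = linear_sum_assignment(-block)
        P = np.zeros((m, m))
        P[rows, cols] = 1.0
        perms.append(P)
    return perms
```

In the noiseless case the recovered matrices satisfy `perms[i] @ perms[j].T == P_i @ P_j.T` for all pairs, i.e., cycle consistency holds exactly; with noisy pairwise matches the SVD step acts as a spectral denoiser before the projection.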
