Abstract

Establishing dense correspondences across semantically similar images is challenging: the unconstrained setting of the images produces large intra-class variation, which easily leads to matching errors. To suppress matching ambiguity, NCNet explores the neighborhood consensus pattern in the 4D space of all possible correspondences, based on the assumption that correct correspondences vary smoothly in space. We retain the neighborhood consensus constraint while injecting semantic segmentation information into the features, which makes them more distinguishable and reduces matching ambiguity from the feature side. Specifically, we combine a semantic segmentation network that extracts semantic features with 4D convolution that exploits context consistency in the 4D correspondence space. Experiments demonstrate that our algorithm achieves good semantic matching performance and that semantic segmentation information improves semantic matching accuracy.
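The "4D space of all possible correspondences" can be made concrete with a short sketch. Assuming L2-normalized feature maps, the correlation volume scores every location pair across the two images, and a soft mutual-nearest-neighbour filter of the kind used in NCNet suppresses one-sided matches before any learned 4D filtering. This is a minimal NumPy illustration, not the authors' implementation; the function names are ours.

```python
import numpy as np

def correlation_4d(feat_a, feat_b):
    """Build the 4D correlation volume c[i, j, k, l] = <f_A(i, j), f_B(k, l)>.

    feat_a, feat_b: (C, H, W) feature maps (assumed L2-normalized over C).
    Returns an (H, W, H, W) tensor of similarity scores.
    """
    return np.einsum('cij,ckl->ijkl', feat_a, feat_b)

def mutual_nn_filter(corr, eps=1e-8):
    """Soft mutual-nearest-neighbour filtering: rescale each score by its
    ratio to the best score of its row (A -> B) and column (B -> A), so
    only mutually strong matches survive with high values."""
    h, w = corr.shape[:2]
    flat = corr.reshape(h * w, h * w)
    max_a = flat.max(axis=1, keepdims=True)  # best match in B per A location
    max_b = flat.max(axis=0, keepdims=True)  # best match in A per B location
    filtered = flat * (flat / (max_a + eps)) * (flat / (max_b + eps))
    return filtered.reshape(corr.shape)
```

In NCNet, the filtered volume is further processed by learnable 4D convolutions that enforce neighborhood consensus among nearby correspondences.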

Highlights

  • Image matching is a basic task in the computer vision field

  • The dataset contains 20 categories and more than 1300 image pairs in total. The images are annotated with keypoints, which are used for network training and for evaluating semantic matching performance

  • We warped images according to the estimated dense semantic matching field and assessed the matching accuracy of all points from the warping quality
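The warping step in the last highlight can be sketched as follows: given a dense field that maps every target pixel to a matched source coordinate, resample the source image at those coordinates. This is a hedged sketch with names of our choosing; nearest-neighbour sampling keeps it short, whereas bilinear sampling is the usual choice in practice.

```python
import numpy as np

def warp_image(src, field):
    """Warp `src` (H, W) or (H, W, C) with a dense matching field.

    field: (H, W, 2) array; field[y, x] = (y_src, x_src) gives, for each
    target pixel, the matched coordinate in the source image.
    Uses nearest-neighbour sampling with border clamping.
    """
    ys = np.clip(np.rint(field[..., 0]).astype(int), 0, src.shape[0] - 1)
    xs = np.clip(np.rint(field[..., 1]).astype(int), 0, src.shape[1] - 1)
    return src[ys, xs]
```

If the estimated field is accurate, the warped source image should align with the target image, so warping quality directly reflects matching accuracy.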


Summary

Introduction

Image matching is a basic task in the computer vision field. Semantic matching differs from classical matching in that it establishes a correspondence field between two images based on semantic consistency [6,7,8,9,10]; in other words, it looks for point pairs with the same semantics across the two images. When it is impractical to estimate correspondences from photometric or geometric consistency, we can still compute matching relationships from shared semantic content. As a building-block technology, semantic matching has been widely used in computer vision applications such as style/motion transfer [11,12], image morphing [13], exemplar-based colorization [14], and image synthesis/translation/super-resolution [15,16,17].

