Abstract

Feature point matching between two images is an essential part of 3D reconstruction, augmented reality, panorama stitching, and related tasks. The quality of the initial feature point matching stage greatly affects the overall performance of a system. We present a unified feature point extraction-matching method that uses semantic segmentation results to constrain feature point matching. To integrate high-level semantic information into feature points efficiently, we propose a unified feature point extraction and matching network, called SP-Net, which detects feature points and generates feature descriptors simultaneously and performs accurate feature point matching. Compared with previous works, our method extracts multi-scale context from the image, including shallow features and high-level semantic information of the local region, and is therefore more stable under complex conditions such as changing illumination or large viewpoint changes. On the feature-matching benchmark, our method outperforms state-of-the-art methods. As further validation, we propose SP-Net++ as an extension for 3D reconstruction. The experimental results show that our network obtains accurate feature point positioning and robust feature matching, recovering more cameras and producing a well-shaped point cloud. Our semantic-assisted method improves the stability of feature points as well as their applicability to complex scenes.
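The core idea of constraining feature matching with semantic segmentation can be illustrated with a minimal sketch. The snippet below is not SP-Net itself: it uses classical ORB features and a brute-force matcher, and assumes two per-pixel semantic label maps (label_map_1, label_map_2) produced by any segmentation network; these names and the filtering rule are illustrative assumptions. A candidate match is discarded when its two keypoints fall on different semantic classes.

```python
# Hypothetical sketch: filtering feature matches with semantic label maps.
# This is an illustration of the semantic constraint, not the SP-Net method.
import cv2
import numpy as np

def semantically_filtered_matches(img1, img2, label_map_1, label_map_2):
    """Match ORB keypoints between img1 and img2, keeping only pairs whose
    keypoints fall on the same semantic class in the two label maps."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    filtered = []
    for m in matches:
        x1, y1 = map(int, kp1[m.queryIdx].pt)
        x2, y2 = map(int, kp2[m.trainIdx].pt)
        # Keep the match only if both keypoints lie on the same semantic label.
        if label_map_1[y1, x1] == label_map_2[y2, x2]:
            filtered.append(m)
    return kp1, kp2, filtered
```

In SP-Net the semantic information is instead fused into the descriptors by the network itself, but the sketch shows why the constraint helps: matches that cross semantic boundaries (e.g. a point on a building matched to a point on sky) are removed before any geometric verification.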
