Abstract

Image feature description and matching are widely used in computer vision tasks such as camera pose estimation. Traditional feature descriptors lack semantic and spatial information and therefore give rise to a large number of feature mismatches. To improve the accuracy of image feature matching, this paper proposes a feature description and matching method based on local semantic information fusion and feature spatial consistency. Object detection is first applied to the images, feature points are then extracted, and image patches of various sizes surrounding these points are cropped. These patches are fed into a Siamese convolutional network to obtain their semantic vectors. A semantic fusion description of each feature point is then obtained as a weighted sum of the semantic vectors, with the weights optimized by the particle swarm optimization (PSO) algorithm. When matching feature points using these descriptions, feature spatial consistency is computed from the spatial consistency of matched objects and from the orientation and distance constraints between adjacent points within matched objects. With this description and matching method, feature points are matched accurately and efficiently. Experimental results demonstrate the effectiveness of our method.
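
As a rough illustration of the description step, the sketch below encodes multi-size patches around a feature point with one branch of a Siamese network and fuses the resulting semantic vectors by a weighted sum. The PatchEncoder architecture, the 128-dimensional embedding, and the helper names are illustrative assumptions rather than the authors' implementation, and the weights passed to fused_description stand in for values that PSO would optimize.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's released code): one branch of a Siamese CNN
# that maps an image patch of any size to a unit-length semantic vector.
class PatchEncoder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # handles patches of different sizes
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, patch):                  # patch: (B, 3, H, W)
        x = self.features(patch).flatten(1)    # (B, 64)
        return nn.functional.normalize(self.fc(x), dim=1)

def fused_description(patches, encoder, weights):
    """Weighted sum of the semantic vectors of multi-size patches around one
    feature point; `weights` stands in for the PSO-optimized weights."""
    vecs = [encoder(p.unsqueeze(0)).squeeze(0) for p in patches]  # each p: (3, H, W)
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()
    return sum(wi * vi for wi, vi in zip(w, vecs))
```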

Highlights

  • Image feature description and matching underpins many image-processing tasks, such as image mosaicking, camera pose estimation, and 3D reconstruction

  • The SIFT feature descriptor uses Euclidean distance as the similarity criterion between descriptors, while the BRIEF descriptor [5] is a binary descriptor and uses Hamming distance to judge the correspondence between two feature points (see the code sketch after this list)

  • We propose a semantic fusion description of feature points and a feature-matching method based on feature spatial consistency
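
For concreteness, the snippet below contrasts the two distance conventions mentioned above using OpenCV's brute-force matcher: Hamming distance for binary ORB/BRIEF descriptors and Euclidean (L2) distance for SIFT descriptors. It is a generic OpenCV example, not code from the paper; the image file names are placeholders.

```python
import cv2

# Placeholder input images (grayscale) for descriptor matching.
img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Binary descriptors (ORB, which builds on BRIEF): match with Hamming distance.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
bf_hamming = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
orb_matches = bf_hamming.match(des1, des2)

# Float descriptors (SIFT): match with Euclidean (L2) distance.
sift = cv2.SIFT_create()
kp1s, des1s = sift.detectAndCompute(img1, None)
kp2s, des2s = sift.detectAndCompute(img2, None)
bf_l2 = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
sift_matches = bf_l2.match(des1s, des2s)
```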


Summary

Introduction

Image feature description and matching is the basis of many tasks in image processing, such as image mosaicking, camera pose estimation, and 3D reconstruction. Researchers have carried out extensive work on image feature extraction and description and have produced many classic methods, such as SIFT (scale-invariant feature transform) [1], SURF (speeded up robust features) [2], ORB (oriented FAST and rotated BRIEF) [3], and FAST (features from accelerated segment test) [4]. These methods obtain image feature points and their descriptors by searching for local extrema in the image and describe the features using the luminance information of their neighborhoods. Convolutional neural networks have achieved remarkable results in image processing: through training, they can learn semantic information at levels ranging from local image patches and object targets to the whole image. With object detection, images are separated into different objects, on which we build a feature description based on semantic fusion and a feature-matching method based on feature spatial consistency.
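
To make the idea of feature spatial consistency concrete, the following sketch filters candidate matches inside a pair of matched object regions by checking whether the distances and orientations from each point to its nearest neighbours change consistently between the two images. The neighbour count, tolerances, and voting rule are assumptions for illustration, not the constraints defined in the paper.

```python
import numpy as np

# Hedged sketch: keep a match only if distances and orientations from the
# point to its nearest neighbours (within the same matched object region)
# change consistently between the two images. Thresholds, the neighbour
# count k, and the majority vote are illustrative assumptions.
def spatially_consistent(pts1, pts2, k=4, dist_tol=0.2, angle_tol=np.deg2rad(15)):
    """pts1, pts2: (N, 2) float arrays of matched point coordinates inside one
    pair of matched object boxes. Returns a boolean mask over the N matches."""
    n = len(pts1)
    keep = np.zeros(n, dtype=bool)
    if n < 2:
        return keep
    for i in range(n):
        # k nearest neighbours of point i in the first image (skip the point itself)
        d1 = np.linalg.norm(pts1 - pts1[i], axis=1)
        nbrs = np.argsort(d1)[1:k + 1]
        v1 = pts1[nbrs] - pts1[i]
        v2 = pts2[nbrs] - pts2[i]
        # relative change of neighbour distances between the two images
        len1 = np.linalg.norm(v1, axis=1)
        len2 = np.linalg.norm(v2, axis=1)
        dist_ok = np.abs(len2 - len1) / (len1 + 1e-6) < dist_tol
        # change of neighbour orientations, wrapped to [-pi, pi]
        ang = np.arctan2(v2[:, 1], v2[:, 0]) - np.arctan2(v1[:, 1], v1[:, 0])
        angle_ok = np.abs(np.arctan2(np.sin(ang), np.cos(ang))) < angle_tol
        # keep the match if most neighbours satisfy both constraints
        keep[i] = np.mean(dist_ok & angle_ok) > 0.5
    return keep
```

In practice such a check would be applied per pair of matched object regions after descriptor matching, discarding candidate matches that fail the consistency test.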

Image Feature Extraction and Description Methods
Feature-Matching Methods
Object Detection Methods
Feature Description Method Based on Semantic Fusion
ORB Feature Extraction
Feature-Matching Algorithm Based on Feature Spatial Consistency
Object Spatial Consistency Based on SSD
Feature Matching with Feature Spatial Consistency
Parameters Optimization of Feature Semantic Description
Figure 13. Feature-matching results using our matching method
Findings
Conclusions