Abstract

Feature matching via local descriptors is one of the most fundamental problems in many computer vision tasks, as well as in the remote sensing image processing community. For example, in feature-based remote sensing image registration, feature matching is a vital step that determines the quality of the transformation model, and during feature matching the quality of the feature descriptor directly determines the matching result. At present, the most commonly used descriptors are hand-crafted based on the designer’s expertise or intuition. However, it is hard to cover all the different cases in this way, especially for remote sensing images with nonlinear grayscale deformation. Recently, deep learning has grown explosively and improved the performance of tasks in various fields, especially in the computer vision community. Here, we created remote sensing image training patch samples, named Invar-Dataset, in a novel and automatic way, and then trained a deep convolutional neural network, named DescNet, to generate a robust feature descriptor for feature matching. A dedicated experiment was carried out to illustrate that our training dataset is more helpful for training a network to generate a good feature descriptor. A qualitative experiment was then performed to show that the feature descriptor vectors learned by DescNet can successfully register remote sensing images with large grayscale differences. A quantitative experiment was then carried out to illustrate that the feature vectors generated by DescNet acquire more matched points than the hand-crafted Scale Invariant Feature Transform (SIFT) descriptor and other networks; on average, DescNet acquired almost twice as many matched points as the other methods. Finally, we analyzed the advantages of the Invar-Dataset and DescNet and discussed possible directions for training deep descriptor networks.
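The abstract does not give the DescNet architecture, so the following is only a minimal sketch of the kind of patch-descriptor network it describes: a small convolutional network that maps a grayscale patch to an L2-normalized descriptor vector and is trained so that matching patches lie close together. The layer sizes, the 128-dimensional output, the 32 × 32 patch size and the triplet loss are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a patch-descriptor CNN in the spirit of DescNet.
# The architecture and descriptor dimension are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, dim)

    def forward(self, patches):           # patches: (N, 1, 32, 32) grayscale
        x = self.features(patches).flatten(1)
        x = self.proj(x)
        return F.normalize(x, dim=1)      # unit-length descriptor vectors

# Training objective (assumption): pull descriptors of matching patches
# together and push non-matching ones apart, e.g. with a triplet margin loss:
# loss = nn.TripletMarginLoss(margin=1.0)(net(anchor), net(positive), net(negative))
```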

Highlights

  • Feature matching is the basis of various remote sensing processing tasks, such as image retrieval [1,2], object recognition [3,4] and image registration [5,6,7,8]; it can also contribute to calibrating attitude sensor performance [9,10]

  • We proposed a novel way of creating training samples automatically, which makes deep learning possible for feature matching of remote sensing images

  • Replacing the hand-crafted SIFT descriptor with the descriptor generated by DescNet can significantly increase the number of correctly matched points in remote sensing image registration, even for images with different resolutions, large grayscale differences and so on. This proves that DescNet generates a better and more robust feature description vector for each feature point, with a stronger ability to determine matched points (see the sketch after this list)
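A minimal sketch of this replacement is given below: SIFT keypoints are still detected with OpenCV, but the hand-crafted descriptor is swapped for the output of a learned patch-descriptor network. It reuses the illustrative PatchDescriptor class from the sketch after the abstract and assumes untrained weights are replaced by trained ones; the 32-pixel patch size, the 0.8 ratio-test threshold and the image file names are placeholders for illustration, not values reported by the paper.

```python
# Sketch: keep the SIFT detector, replace its descriptor with a learned one.
import cv2
import numpy as np
import torch

def learned_descriptors(gray, keypoints, net, patch_size=32):
    """Describe each keypoint by running a patch around it through the network."""
    patches = []
    for kp in keypoints:
        p = cv2.getRectSubPix(gray, (patch_size, patch_size), kp.pt)
        patches.append(p.astype(np.float32) / 255.0)
    batch = torch.from_numpy(np.stack(patches))[:, None]   # (N, 1, H, W)
    with torch.no_grad():
        return net(batch).cpu().numpy()                    # float32 descriptors

def match_ratio(desc1, desc2, ratio=0.8):
    """Nearest-neighbor matching with Lowe-style ratio test."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc1, desc2, k=2)
    return [m[0] for m in pairs if len(m) == 2 and m[0].distance < ratio * m[1].distance]

sift = cv2.SIFT_create()
img1 = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("sensed.tif", cv2.IMREAD_GRAYSCALE)
kp1, kp2 = sift.detect(img1, None), sift.detect(img2, None)

net = PatchDescriptor().eval()      # in practice, trained DescNet weights would be loaded here
good = match_ratio(learned_descriptors(img1, kp1, net),
                   learned_descriptors(img2, kp2, net))
```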


Summary

Introduction

Feature matching is the basis of various remote sensing processing tasks, such as image retrieval [1,2], object recognition [3,4] and image registration [5,6,7,8]. Feature matching can also contribute to calibrating attitude sensor performance [9,10]. Under non-linear brightness variation, a common problem in remote sensing images, the calculated principal direction of a SIFT feature point is unreliable, because the statistics of the gradients around the feature point change. This produces many falsely matched points, or fewer matched points, which results in mis-registration or failure of the registration. The description of feature points detected from images with non-linear brightness variation is therefore a bottleneck for further processing of remote sensing images, especially for the registration task.
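To make the orientation problem concrete, the sketch below computes a SIFT-style principal orientation as the peak of a gradient-orientation histogram weighted by gradient magnitude, and then applies a non-linear grayscale change, simulated here with a gamma curve (an assumption for illustration). Because the gradient magnitudes re-weight the histogram, the dominant bin, and hence the assigned orientation, can shift between the two versions of the same patch.

```python
# Sketch: principal orientation from gradient statistics around a keypoint,
# and how a non-linear grayscale change can perturb it.
import numpy as np

def principal_orientation(patch, n_bins=36):
    gy, gx = np.gradient(patch.astype(np.float32))          # image gradients
    mag = np.hypot(gx, gy)                                   # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0             # gradient orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 360), weights=mag)
    return (np.argmax(hist) + 0.5) * (360.0 / n_bins)        # dominant bin center

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
print(principal_orientation(patch))          # orientation on the original patch
print(principal_orientation(patch ** 0.4))   # orientation after a simulated non-linear change
```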

