Abstract

With the continuous development of underwater vision technology, ever more remote sensing images can be obtained. In underwater scenes, sonar sensors are currently the most effective remote perception devices, and the sonar images they capture provide rich environmental information. To analyze a given scene, sonar images acquired at different times, at different sonar frequencies, and from different viewpoints often need to be merged. However, these conditions introduce nonlinear intensity differences between sonar images, which render traditional matching methods largely ineffective. This paper proposes a matching method for sonar images with nonlinear intensity differences that combines local feature points with deep convolutional features. The method has two key advantages: (i) training samples associated with local feature points are generated following a self-learning idea; (ii) a convolutional neural network (CNN) with a Siamese architecture measures the similarity of local positions in a sonar image pair. Our method encapsulates the feature extraction and feature matching stages in a single model, directly learns the mapping from image patch pairs to matching labels, and thereby accomplishes the matching task in a near-end-to-end manner. Feature matching experiments were carried out on sonar images acquired by an autonomous underwater vehicle (AUV) in a real underwater environment. The results show that our method achieves better matching performance and strong robustness.
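The sketch below illustrates the core idea the abstract describes: a Siamese CNN whose shared branches embed two sonar image patches, with a small head that scores whether the pair matches. This is a minimal illustration, not the authors' implementation; the patch size (64x64), layer widths, and embedding dimension are assumptions chosen for the example.

```python
# Minimal sketch (assumed architecture, not the paper's exact network) of a
# Siamese CNN that maps a pair of grayscale sonar patches to a match score.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Shared convolutional branch that embeds one grayscale patch."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global pooling -> (B, 128, 1, 1)
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))

class SiameseMatcher(nn.Module):
    """Scores whether two patches depict the same local structure."""
    def __init__(self):
        super().__init__()
        self.encoder = PatchEncoder()             # weights shared by both inputs
        self.head = nn.Sequential(
            nn.Linear(2 * 128, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # logit; sigmoid -> match probability
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        ea, eb = self.encoder(a), self.encoder(b)
        return self.head(torch.cat([ea, eb], dim=1)).squeeze(1)

if __name__ == "__main__":
    model = SiameseMatcher()
    # Two batches of 64x64 grayscale sonar patches (random stand-ins here).
    pa, pb = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
    probs = torch.sigmoid(model(pa, pb))   # train with nn.BCEWithLogitsLoss on match labels
    print(probs.shape)                     # torch.Size([8])
```

In this reading, patch pairs centered on corresponding local feature points would serve as positive samples and mismatched pairs as negatives, so the network learns the patch-pair-to-label mapping directly, consistent with the near-end-to-end formulation the abstract claims.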
