Abstract

This paper proposes a novel training procedure for deep neural networks that match optical images to synthetic aperture radar (SAR) images. The architecture combines a convolutional Siamese neural network with a computational visual saliency map, which acts as an indicator that directs the network to emphasize its most informative features. First, well-known visual saliency map extraction algorithms are analyzed and discussed to determine a fusion strategy with the main network, the convolutional Siamese CNN. Second, two Siamese variants, a Pseudo-Siamese CNN and an Identical-Siamese CNN, are studied, and experiments on both are reported in detail. The experiments show that a computational visual saliency map helps the Siamese networks select more informative features during matching: incorporating the saliency map increases matching accuracy regardless of whether the Siamese branches share weights.
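The core idea, weighting the feature maps of both Siamese branches by a visual saliency map before comparing them, can be illustrated with a toy sketch. This is not the paper's implementation: the flat lists stand in for CNN feature maps, the saliency values are made up, and the function names are illustrative only.

```python
import math

def saliency_weighted_features(features, saliency):
    """Emphasize feature activations with an externally computed saliency map.

    `features` and `saliency` are flat lists of equal length, a toy stand-in
    for a CNN feature map and a visual saliency map of the same spatial size.
    """
    return [f * s for f, s in zip(features, saliency)]

def l2_distance(a, b):
    """Matching score between the two branch outputs: smaller = more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Outputs of the two branches (optical / SAR). In an Identical-Siamese CNN the
# branches share weights; in a Pseudo-Siamese CNN each branch has its own.
optical_feats = [0.2, 0.9, 0.1, 0.7]   # hypothetical activations
sar_feats     = [0.3, 0.8, 0.5, 0.6]   # hypothetical activations
saliency      = [0.1, 1.0, 0.1, 1.0]   # hypothetical saliency map

plain_dist    = l2_distance(optical_feats, sar_feats)
weighted_dist = l2_distance(saliency_weighted_features(optical_feats, saliency),
                            saliency_weighted_features(sar_feats, saliency))
```

Here the saliency map down-weights the third feature, where the two modalities disagree but saliency is low, so the weighted distance between the matching patches shrinks relative to the plain distance.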
