Abstract

Multitemporal optical remote sensing image registration remains a challenging problem for current feature-based registration algorithms because of the complex nonlinear discrepancies that arise from diverse factors, including changes in illumination, weather, and surface conditions. To address this issue, this article proposes dual receptive field descriptors (DRFDs) constructed by a novel deep convolutional network. In addition, a novel inner loss function (ILF) that imposes constraints on the intermediate descriptors is adopted to strengthen the distinguishability of the descriptors when the overlapping areas of the input image patches are large. Subsequently, dual feature distance maps (DFDMs) are built from the DRFDs and combined with features from accelerated segment test (FAST) key points for efficient and accurate correspondence establishment between the source and target images. Finally, an iterative algorithm is proposed to remove possible outliers. Experiments show that the combination of DRFDs trained with the ILF performs better than current learnable local descriptors such as L2-Net, HardNet, and SOSNet, and that the registration results obtained with our method are more accurate than those of methods based on these learnable descriptors as well as handcrafted descriptors such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and oriented FAST and rotated BRIEF (ORB).
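
As a rough illustration of the matching and outlier-removal stages summarized above, the sketch below wires together FAST key point detection, descriptor-distance matching, and an iterative geometric filter using OpenCV and NumPy. The DRFD network, the ILF, and the dual feature distance maps are not reproduced here: a plain normalized-patch descriptor (describe_patches) stands in for the learned descriptors, and the homography-based filter (iterative_outlier_removal) is only an assumed analogue of the paper's iterative outlier removal, not the authors' algorithm. All function names are illustrative.

import cv2
import numpy as np

def fast_keypoints(image, max_kp=500):
    # Detect FAST key points on a grayscale image and keep the strongest responses.
    detector = cv2.FastFeatureDetector_create()
    kps = detector.detect(image, None)
    kps = sorted(kps, key=lambda k: k.response, reverse=True)[:max_kp]
    return np.array([k.pt for k in kps], dtype=np.float32)

def describe_patches(image, points, patch=32):
    # Placeholder descriptor: flattened, L2-normalized grayscale patches.
    # In the paper, this role is played by the learned DRFDs.
    half = patch // 2
    padded = cv2.copyMakeBorder(image, half, half, half, half, cv2.BORDER_REFLECT)
    descs = []
    for x, y in points:
        cx, cy = int(round(x)) + half, int(round(y)) + half
        p = padded[cy - half:cy + half, cx - half:cx + half].astype(np.float32).ravel()
        descs.append(p / (np.linalg.norm(p) + 1e-8))
    return np.stack(descs)

def mutual_matches(desc_src, desc_tgt):
    # Match by squared Euclidean descriptor distance with a mutual nearest-neighbor check.
    d2 = ((desc_src ** 2).sum(1)[:, None]
          + (desc_tgt ** 2).sum(1)[None, :]
          - 2.0 * desc_src @ desc_tgt.T)
    fwd = d2.argmin(axis=1)
    bwd = d2.argmin(axis=0)
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

def iterative_outlier_removal(pts_src, pts_tgt, iters=5, thresh=3.0):
    # Assumed analogue of the iterative outlier removal: repeatedly fit a
    # homography and drop correspondences with a large reprojection error.
    keep = np.arange(len(pts_src))
    for _ in range(iters):
        if len(keep) < 4:
            break
        H, _ = cv2.findHomography(pts_src[keep], pts_tgt[keep], cv2.RANSAC, thresh)
        if H is None:
            break
        proj = cv2.perspectiveTransform(pts_src[keep].reshape(-1, 1, 2), H).reshape(-1, 2)
        err = np.linalg.norm(proj - pts_tgt[keep], axis=1)
        inliers = keep[err < thresh]
        if len(inliers) < 4 or len(inliers) == len(keep):
            break
        keep = inliers
    return keep

# Example wiring (src and tgt are grayscale uint8 images):
# pts_s, pts_t = fast_keypoints(src), fast_keypoints(tgt)
# matches = mutual_matches(describe_patches(src, pts_s), describe_patches(tgt, pts_t))
# m_s = pts_s[[i for i, _ in matches]]
# m_t = pts_t[[j for _, j in matches]]
# inlier_idx = iterative_outlier_removal(m_s, m_t)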
