Abstract

In this paper, we propose a matching network that gradually estimates the geometric transformation parameters between two aerial images taken over the same area but under different conditions. To precisely match two aerial images, several factors must be considered, such as differences in acquisition time and variations in viewpoint, scale, and rotation. Conventional methods for matching aerial image pairs under such large variations are extremely time-consuming and limited in their ability to find correct correspondences, because the image gradients and grayscale intensities used to generate feature descriptors are not robust to these variations. We design the network as an end-to-end trainable deep neural network that reflects the characteristics of aerial images. Its hierarchical structure, which estimates the rotation and then the affine transformation in order, reduces the range of predictions and minimizes errors caused by misalignment, resulting in more precise matching. Furthermore, we apply transfer learning to make the feature extraction networks more robust and better suited to the aerial image domain with its large variations. For the experiments, we use remote sensing image datasets from Google Earth and the International Society for Photogrammetry and Remote Sensing (ISPRS). To evaluate our method quantitatively, we measure the probability of correct keypoints (PCK) metric, which objectively compares the degree of matching. In both qualitative and quantitative assessments, our method demonstrates state-of-the-art performance compared to existing methods.
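As a rough illustration of the hierarchical idea described above, the sketch below shows a two-stage estimator that regresses a rotation angle first, warps the source image accordingly, and then regresses the remaining affine parameters. This is not the paper's architecture; the backbone, feature dimensions, and layer names are placeholders chosen only to make the stage-wise structure concrete.

```python
import torch
import torch.nn as nn

class TwoStageAligner(nn.Module):
    """Illustrative two-stage estimator: rotation first, then affine refinement.

    NOTE: hypothetical sketch, not the paper's network. It only illustrates
    narrowing the prediction range by removing rotation before regressing
    the remaining affine parameters.
    """
    def __init__(self, feat_dim=256):
        super().__init__()
        # Placeholder feature extractor (the paper uses transfer-learned backbones).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(8),
        )
        # Stage 1: regress a single rotation angle from the concatenated features.
        self.rot_head = nn.Linear(2 * feat_dim * 8 * 8, 1)
        # Stage 2: regress the 6 affine parameters once rotation has been removed.
        self.aff_head = nn.Linear(2 * feat_dim * 8 * 8, 6)

    def forward(self, src, tgt):
        f = lambda x: self.backbone(x).flatten(1)        # pooled, flattened features
        theta = self.rot_head(torch.cat([f(src), f(tgt)], dim=1))   # rotation angle
        # Warp the source image by the estimated rotation (grid_sample convention).
        cos, sin = torch.cos(theta), torch.sin(theta)
        rot = torch.stack([torch.cat([cos, -sin, torch.zeros_like(cos)], dim=1),
                           torch.cat([sin,  cos, torch.zeros_like(cos)], dim=1)], dim=1)
        grid = nn.functional.affine_grid(rot, src.size(), align_corners=False)
        src_rot = nn.functional.grid_sample(src, grid, align_corners=False)
        affine = self.aff_head(torch.cat([f(src_rot), f(tgt)], dim=1))  # refined affine
        return theta, affine
```

The PCK metric used for evaluation counts a transferred keypoint as correct when it lands within a tolerance of its ground-truth correspondence; a common convention sets the tolerance to a fraction alpha of the larger image dimension. Below is a minimal sketch under that assumption (the exact threshold definition used in the paper may differ, and the function and variable names are hypothetical).

```python
import numpy as np

def pck(pred_kpts, gt_kpts, img_size, alpha=0.05):
    """Probability of Correct Keypoints (PCK).

    A predicted keypoint counts as correct when its Euclidean distance to the
    ground-truth keypoint is below alpha * max(image height, image width).

    pred_kpts, gt_kpts : (N, 2) arrays of (x, y) pixel coordinates
    img_size           : (height, width) of the reference image
    alpha              : tolerance factor (0.05 and 0.10 are common choices)
    """
    pred_kpts = np.asarray(pred_kpts, dtype=np.float64)
    gt_kpts = np.asarray(gt_kpts, dtype=np.float64)
    threshold = alpha * max(img_size)                    # tolerance in pixels
    dists = np.linalg.norm(pred_kpts - gt_kpts, axis=1)  # per-keypoint error
    return float(np.mean(dists <= threshold))            # fraction of correct keypoints

# Hypothetical usage: keypoints transferred by the estimated transformation
# are compared against annotated correspondences in the target image.
pred = [[120.0, 64.0], [300.5, 210.2]]
gt   = [[118.0, 66.0], [305.0, 208.0]]
print(pck(pred, gt, img_size=(480, 640), alpha=0.05))
```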
