Abstract

In this article, we propose an automatic and efficient method to solve optical and synthetic aperture radar (SAR) image registration using the improved phase congruency (PC) model. First, evenly distributed keypoints are extracted from the optical images via the block-Harris method. Complementary grid points are then selected in image regions with poor structural information and added to the keypoint set. For each keypoint, a robust feature representation that captures the local spatial relationship is proposed based on the improved PC model. Specifically, we propose to use two different PC models, the classic PC and the SAR-PC, to construct features for optical and SAR images, respectively. The PC features of several directions are aggregated to construct the feature descriptors, and a similarity metric is obtained via the phase correlation of these descriptors. The proposed similarity metric not only finds accurate correspondences but is also efficient, as it does not require presetting the size of the search region. We compare the proposed method with two baselines and state-of-the-art (SOTA) methods, i.e., OS-SIFT, histogram of oriented PC, and channel features of oriented gradients, in various scenarios. The results show that the proposed method outperforms the baselines, performs comparably to the SOTA methods in regions with abundant structural information, and performs better in regions with less structural information. Moreover, we build a high-resolution optical and SAR image matching dataset, which consists of 10 692 nonoverlapping patch pairs of $256\times 256$ pixels and 1-m resolution. Results of two benchmarks, a Siamese deep matching network and conditional generative adversarial networks, show that this dataset is practical and challenging.
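A minimal sketch (not the authors' code) of the similarity step described above: two multi-orientation PC feature descriptors are compared by phase correlation, and the peak of the resulting correlation surface gives both a match score and the translation offset. Descriptor construction is assumed to happen elsewhere; here the descriptors are simply two 2-D arrays of identical shape, and the function name `phase_correlation` is a hypothetical choice.

```python
import numpy as np

def phase_correlation(desc_opt: np.ndarray, desc_sar: np.ndarray):
    """Return (peak_value, (dy, dx)) from the normalized cross-power spectrum.

    desc_opt, desc_sar : 2-D feature maps of identical shape, e.g., aggregated
    PC responses around an optical keypoint and around a SAR candidate region.
    """
    F1 = np.fft.fft2(desc_opt)
    F2 = np.fft.fft2(desc_sar)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12       # keep only the phase term
    corr = np.real(np.fft.ifft2(cross_power))        # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert the peak location to signed displacements (FFT wrap-around).
    dy = peak[0] if peak[0] <= corr.shape[0] // 2 else peak[0] - corr.shape[0]
    dx = peak[1] if peak[1] <= corr.shape[1] // 2 else peak[1] - corr.shape[1]
    return corr[peak], (dy, dx)
```

Because the whole SAR feature map can be correlated at once in the frequency domain, no search-region size has to be fixed in advance, which is the efficiency argument made in the abstract.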

Highlights

  • We propose an automatic and efficient method to solve optical and synthetic aperture radar (SAR) image registration, and further utilize the proposed method to build a high-resolution deep learning dataset

  • The input optical image is divided into L × L nonoverlapping blocks, and the l points with the highest Harris values in each block are selected as keypoints (see the sketch after this list)

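A minimal sketch of the block-Harris idea from the highlight above, assuming OpenCV and NumPy; the function name, the interpretation of L as the block side length in pixels, and the default parameter values are illustrative choices rather than the paper's settings.

```python
import cv2
import numpy as np

def block_harris_keypoints(gray: np.ndarray, L: int = 128, l: int = 5) -> np.ndarray:
    """Return (row, col) keypoints, keeping the l strongest Harris responses per block."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    keypoints = []
    H, W = gray.shape
    for r0 in range(0, H, L):
        for c0 in range(0, W, L):
            block = response[r0:r0 + L, c0:c0 + L]
            # Indices of the l largest Harris responses inside this block.
            top = np.argsort(block.ravel())[-l:]
            rows, cols = np.unravel_index(top, block.shape)
            keypoints.extend(zip(rows + r0, cols + c0))
    return np.array(keypoints)
```

Selecting a fixed number of points per block, rather than thresholding the whole image, is what keeps the keypoints evenly distributed across the optical scene.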

Summary

INTRODUCTION

A few researchers have proposed robust feature descriptors and advanced outlier-removal techniques to improve the matching accuracy in the fine registration stage. Compared with the conference paper, we integrate the proposed method into a registration framework and use it to build a high-resolution image matching dataset for deep learning. An automatic and efficient framework for optical and SAR image registration is proposed, which modifies the keypoint detection stage by adding grid points and improves the keypoint matching stage by combining two designed features with efficient phase correlation. By collecting and exploiting 20 pairs of optical and SAR scenes, we build the OS dataset of 10 692 patch pairs with 256 × 256 pixels and 1-m resolution, which is applicable to deep learning-based image matching/fusion tasks.
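The grid-point supplementation mentioned above can be sketched in the same spirit (again an assumed implementation, not the authors' code): blocks whose strongest Harris response falls below a threshold are treated as regions with poor structural information, and their centres are added as complementary grid points so that the keypoint set still covers the whole image. The threshold `tau`, the block size `L`, and the function name are illustrative.

```python
import numpy as np

def supplement_grid_points(response: np.ndarray, L: int = 128, tau: float = 1e-4) -> np.ndarray:
    """Return (row, col) grid points for blocks whose Harris response map is weak."""
    grid_points = []
    H, W = response.shape
    for r0 in range(0, H, L):
        for c0 in range(0, W, L):
            block = response[r0:r0 + L, c0:c0 + L]
            if block.size and block.max() < tau:      # little structure in this block
                grid_points.append((r0 + block.shape[0] // 2,
                                    c0 + block.shape[1] // 2))
    return np.array(grid_points)
```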

PC MODEL
Keypoint Detection
Keypoint Matching
CONSTRUCTION OF THE OS DATASET
EXPERIMENTAL RESULTS
Analysis of the Parameter Settings
Analysis of the Grid Points
Evaluation of the OS Dataset Based on Two Benchmarks
Findings
Limitations and Future Works
CONCLUSION