Abstract

Image fusion is an effective way to combine complementary information from multi-source data. In particular, fusing synthetic aperture radar (SAR) and panchromatic images improves the visual perception of objects and supplements spatial information. However, conventional fusion methods fail to account for the differences in imaging mechanisms and therefore cannot fully exploit the available information. Thus, this paper proposes a novel fusion method that both considers the differences in imaging mechanisms and provides sufficient spatial information. The proposed method is learning-based; it first selects the data to be used for learning. Then, to reduce complexity, classification is performed on the stacked image, and learning is performed independently for each class. Subsequently, to consider sufficient information, various features are extracted from the SAR image. Learning relies on the model’s ability to establish non-linear relationships, minimizing the effect of the differences in imaging mechanisms; a representative non-linear regression model, random forest regression, is used. Finally, the performance of the proposed method is evaluated by comparison with conventional methods. The experimental results show that the proposed method is superior in both visual and quantitative terms, verifying its applicability.
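The class-wise learning pipeline described above (classify the stacked image, then learn a non-linear SAR-to-panchromatic mapping per class) can be sketched as follows. This is a minimal illustration, not the paper’s implementation: the function and variable names, the input shapes, and the use of scikit-learn’s RandomForestRegressor are assumptions, and the paper’s specific feature extraction and training-sample selection are omitted.

```python
# Minimal sketch of class-wise random-forest fusion, assuming co-registered
# inputs. Names, shapes, and the scikit-learn model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fuse_classwise(sar_features, pan, class_map, n_trees=100):
    """Learn panchromatic intensities from SAR-derived features, one model per class.

    sar_features : (H, W, F) array of features extracted from the SAR image
    pan          : (H, W) panchromatic image, co-registered with the SAR data
    class_map    : (H, W) integer labels from classifying the stacked image
    """
    fused = np.zeros_like(pan, dtype=np.float64)
    for c in np.unique(class_map):
        mask = class_map == c              # pixels belonging to class c
        X = sar_features[mask]             # (n_pixels, F) feature matrix
        y = pan[mask]                      # target panchromatic intensities
        model = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1)
        model.fit(X, y)                    # non-linear SAR -> PAN relationship
        fused[mask] = model.predict(X)     # per-class fused intensities
    return fused
```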

Highlights

  • Various high-resolution satellite sensors have increasingly been developed, especially the synthetic aperture radar (SAR) imaging sensor, which offers important advantages for Earth observation [1,2]

  • The à trous wavelet decomposition (ATWD) method is based on the importance of the wavelet coefficients, which are incorporated into the SAR image at a certain high frequency

  • The non-subsampled contourlet transform (NSCT) method is based on the contourlet transform without downsamplers and upsamplers; it selects the averaging scheme at low frequencies and the maximum scheme at high frequencies (see the sketch after this list)
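
The low/high-frequency fusion rule mentioned in the last highlight can be illustrated with a short sketch. As an assumption for brevity, a standard 2-D discrete wavelet transform (PyWavelets’ wavedec2/waverec2) stands in for the full non-subsampled contourlet transform; the rule itself is the same: average the low-frequency coefficients and keep the maximum-magnitude high-frequency coefficients.

```python
# Sketch of the NSCT-style fusion rule: averaging at low frequency,
# maximum selection at high frequency. A plain discrete wavelet transform
# is used here in place of the NSCT (an assumption for brevity).
import numpy as np
import pywt

def low_high_fusion(img_a, img_b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]  # low frequency: averaging scheme
    for details_a, details_b in zip(ca[1:], cb[1:]):
        # each level holds (horizontal, vertical, diagonal) detail bands
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)  # high frequency: maximum scheme
            for a, b in zip(details_a, details_b)
        ))
    return pywt.waverec2(fused, wavelet)
```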

Introduction

Various high-resolution satellite sensors have increasingly been developed, especially the synthetic aperture radar (SAR) imaging sensor, which offers important advantages for Earth observation [1,2]. It is an active sensor that provides its own source of illumination, independent of solar illumination and unaffected by daylight or darkness [3]. Its signal penetrates atmospheric effects, allowing Earth observation regardless of weather conditions such as rain, fog, smoke, or clouds [4,5]. The information contained in a SAR image depends on the backscattering characteristics of the surface targets and is sensitive to their geometry [6]. The image provides information on surface roughness, object shape, orientation, and moisture content [7,8]. Interpreting the details in SAR images is nevertheless a challenging task for several reasons.
