Abstract

This paper addresses the fusion of optical and Synthetic Aperture Radar (SAR) images. Intensity–Hue–Saturation (IHS) fusion is easy to implement and separates Red–Green–Blue (RGB) images into three independent components; however, applying it directly to optical–SAR fusion causes spectral distortion. The Gradient Transfer Fusion (GTF) algorithm was originally proposed for fusing infrared and grayscale visible images; it formulates image fusion as an optimization problem that preserves radiation information and spatial details simultaneously. However, GTF assumes that the spatial details come from only one of the source images, which does not hold for optical and SAR image fusion. This paper proposes a fusion algorithm for optical and SAR images, named IHS-GTF, which combines the advantages of IHS and GTF and draws spatial details from both images based on pixel saliency. The proposed method was assessed by visual analysis and ten quantitative indices, and was further tested by extracting impervious surface (IS) from the fused image with a random forest classifier. The results show that the proposed method preserves spatial details and spectral information well, and that the overall accuracy of IS extraction is 2% higher than when using the optical image alone. These results demonstrate that the proposed method can fuse optical and SAR data effectively to generate useful data.
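The IHS substitution step that the abstract refers to can be illustrated with a minimal sketch. This is not the paper's IHS-GTF method, only the classical "fast IHS" baseline it builds on: the intensity component of the RGB image is replaced by a new intensity (here a co-registered SAR-derived band, named `pan` for illustration), which is equivalent to adding the intensity difference to every band.

```python
import numpy as np

def ihs_fuse(rgb, pan):
    """Fast IHS fusion sketch.

    rgb : (H, W, 3) float array in [0, 1], the optical image.
    pan : (H, W) float array in [0, 1], the substitute intensity
          (e.g. a despeckled, co-registered SAR band).
    """
    # I component of the linear IHS model: mean of the three bands.
    intensity = rgb.mean(axis=2)
    # Replacing I with `pan` and inverting the IHS transform reduces
    # to adding the intensity difference to each band ("fast IHS").
    delta = pan - intensity
    fused = rgb + delta[..., None]
    return np.clip(fused, 0.0, 1.0)
```

Because the substituted intensity overwrites the optical intensity entirely, spectral distortion appears wherever the SAR intensity differs strongly from the optical one, which is the limitation the abstract notes.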

Highlights

  • With the rapid development of Earth observation technology, various remote sensing sensors have begun to play a role, bringing a wealth of available data for research [1]

  • Synthetic Aperture Radar (SAR) images preserve the contours of ground objects well, while optical images preserve their spectral information; the ideal result of optical–SAR fusion should therefore look like an enhanced version of the optical image

  • Comparing the user's accuracy (UA) and producer's accuracy (PA) of the land cover types shows that the improvement from the IHS-Gradient Transfer Fusion (GTF) fused image is concentrated in bright impervious surface (BIS) and bare soil (BS); dark impervious surface (DIS) shows no improvement, or even lower accuracy than with Sentinel-2A alone; and vegetation (VG) and water bodies (WB) show no significant improvement



Introduction

With the rapid development of Earth observation technology, a variety of remote sensing sensors have come into use, providing a wealth of data for research [1]. Each sensor has limitations: hyperspectral remote sensing offers high spectral resolution but low spatial resolution, and SAR images are difficult to interpret and limited in application because of their inherent speckle [2]. With the increasing complexity of observation tasks and the high heterogeneity of observation scenes [3], information from a single data source cannot meet the requirements, so data from different images need to be collected and combined into a single image in order to extract additional information [4]. This is where image fusion comes in. Pixel-level image fusion combines two or more images covering the same scene into one high-quality image through a suitable algorithm [5].
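A toy example can make pixel-level fusion concrete. The sketch below is not the paper's algorithm; it only illustrates the general idea of combining two co-registered images with per-pixel weights, using local gradient magnitude as a stand-in for the pixel saliency the paper mentions (the function names and the epsilon guard are assumptions for this illustration).

```python
import numpy as np

def gradient_saliency(img):
    """Saliency proxy: per-pixel gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img)          # derivatives along rows and columns
    return np.hypot(gx, gy)

def saliency_weighted_fuse(img_a, img_b, eps=1e-8):
    """Fuse two co-registered 2-D images, weighting each pixel toward
    the source with stronger local detail."""
    sa = gradient_saliency(img_a)
    sb = gradient_saliency(img_b)
    w = sa / (sa + sb + eps)           # weight in [0, 1]; eps avoids 0/0
    return w * img_a + (1.0 - w) * img_b
```

A flat (detail-free) source receives near-zero weight, so the fused pixel comes from the more structured image; where both sources carry detail, the result blends them in proportion to their saliency.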

