Abstract

In this paper, we present a workflow to investigate the joint visibility between very-high-resolution SAR and optical images of urban scenes. For this task, we extend the simulation framework SimGeoI to enable a simulation of individual pixels rather than complete images. Using the extended SimGeoI simulator, we carry out a case study using a TerraSAR-X staring spotlight image and a WorldView-2 panchromatic image acquired over the city of Munich, Germany. The results of this study indicate that about 55 % of the scene is visible in both images and is thus suitable for matching and data fusion endeavours, while about 25 % of the scene is affected by either radar shadow or optical occlusion. Taking the image acquisition parameters into account, our findings can provide support regarding the definition of upper bounds for image fusion tasks, as well as help to improve acquisition planning with respect to different application goals.
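The joint-visibility statistics described above boil down to combining per-pixel visibility masks from the two sensors. The following sketch illustrates this bookkeeping with randomly generated masks; the mask names and percentages are purely illustrative and do not reproduce the SimGeoI output or the Munich case-study numbers.

```python
import numpy as np

# Hypothetical per-pixel visibility masks over a georeferenced scene grid,
# standing in for the output of a per-pixel simulation (illustrative only,
# not the actual SimGeoI interface).
rng = np.random.default_rng(0)
radar_shadow = rng.random((100, 100)) < 0.15       # pixels lying in radar shadow
optical_occlusion = rng.random((100, 100)) < 0.12  # pixels occluded in the optical view

# A pixel is usable for matching/fusion only if it is visible to both sensors.
jointly_visible = ~radar_shadow & ~optical_occlusion
affected = radar_shadow | optical_occlusion

print(f"jointly visible: {jointly_visible.mean():.1%}")
print(f"shadowed or occluded: {affected.mean():.1%}")
```

Every pixel falls into exactly one of the two categories, so the two fractions sum to 100 % in this simplified two-class setting; the actual study distinguishes shadow and occlusion separately.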

Highlights

  • One of the most important examples for the exploitation of complementary information from different remote sensing sensors is the joint use of synthetic aperture radar (SAR) and optical data (Tupin, 2010; Schmitt et al., 2017).

  • While the study demonstrated the general feasibility of sparse SAR-optical stereogrammetry of urban scenes, it brought to light the difficulties involved with robust tie-point matching in the domain of VHR remote sensing imagery.

  • As shown by Qiu et al. (2018), in order to have favourable conditions for stereogrammetry, the baseline between the sensors should be as small as possible. A small baseline is also favourable for joint visibility: it ensures that the radar shadow overlaps with the points occluded in the optical image, decreasing the non-visible regions.
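The baseline argument in the last highlight can be illustrated with a simplified flat-terrain geometry: a vertical wall of height h hides a ground strip of length h·tan(θ) on its far side, where θ is the sensor's angle from nadir (the incidence angle for SAR, the off-nadir view angle for optical). All numbers below are illustrative assumptions, not values from the paper.

```python
import math

def hidden_extent(height_m, look_angle_deg):
    """Ground-range extent hidden behind a vertical wall of the given height,
    for a sensor viewing at the given angle from nadir (radar shadow for SAR,
    occlusion for an oblique optical acquisition)."""
    return height_m * math.tan(math.radians(look_angle_deg))

h = 20.0  # assumed building height in metres

sar = hidden_extent(h, 45.0)  # assumed SAR incidence angle
opt = hidden_extent(h, 30.0)  # assumed optical off-nadir angle

# Same-side acquisition (small baseline): the two hidden strips overlap,
# so the combined non-visible strip is only as long as the larger one.
same_side = max(sar, opt)

# Opposite-side acquisition: the strips fall on opposite sides of the
# building and their lengths add up.
opposite_side = sar + opt

print(f"SAR shadow: {sar:.1f} m, optical occlusion: {opt:.1f} m")
print(f"non-visible ground, same side: {same_side:.1f} m; opposite sides: {opposite_side:.1f} m")
```

In this toy setting the same-side configuration leaves clearly less ground invisible than the opposite-side one, which is the geometric intuition behind preferring a small baseline.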


Introduction

One of the most important examples for the exploitation of complementary information from different remote sensing sensors is the joint use of synthetic aperture radar (SAR) and optical data (Tupin, 2010; Schmitt et al., 2017). The challenge of fusing SAR and optical data is greatest when data of very high spatial resolution covering complex built-up areas are to be fused. One example for this is very-high-resolution (VHR) multi-sensor stereogrammetry as discussed by Qiu et al. (2018). While that study demonstrated the general feasibility of sparse SAR-optical stereogrammetry of urban scenes, it brought to light the difficulties involved with robust tie-point matching in the domain of VHR remote sensing imagery. These difficulties, discussed before by Zhang (2010), Dalla Mura et al. (2015), and Schmitt and Zhu (2016), are caused by the vastly different imaging geometries of SAR and optical images. Incorrectly matched pixels lead to a degraded, and sometimes meaningless, fusion product.

