Abstract

Due to its capability to acquire data at any time, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. The ability to interpret the data is limited, even for experts, as the human eye is not familiar with the impact of distance-dependent imaging, signal intensities detected in the radar spectrum, or image characteristics related to speckle or post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation in order to support the interpretation and analysis of the original data. A conditional adversarial network is adopted and optimized to generate alternative SAR image representations, trained on pairs of SAR images (starting point) and optical images (reference). Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of the results on follow-up applications, and the discussion of opportunities and drawbacks related to this application of deep learning. Case study results are shown for high-resolution (SAR: TerraSAR-X, optical: ALOS PRISM) and low-resolution (Sentinel-1 and -2) data. The properties of the alternative image representation are evaluated based on feedback from experts in SAR remote sensing and on the impact on road extraction as an example of a follow-up application. The results provide a basis for explaining fundamental limitations affecting the SAR-to-optical image translation idea, but also indicate benefits from alternative SAR image representations.

Highlights

  • Synthetic aperture radar (SAR) sensors constitute an important source of information in Earth observation, as they allow data takes to be planned reliably

  • Considering the current state-of-the-art, this paper investigates the utilization of an adapted version of the CycleGAN architecture [8] for the synthetic aperture radar (SAR)-to-optical image translation task and the value of domain knowledge

  • We can observe that the SAR images are translated into a new domain with features similar to the optical references

Introduction

Synthetic aperture radar (SAR) sensors constitute an important source of information in Earth observation, as they allow data takes to be planned reliably. In consideration of the above points, it can be expected that users of SAR data may on occasion wish to use additional means to facilitate the interpretation of SAR images. This paper follows this idea and investigates the potential of generative deep learning models in the context of SAR image interpretation, as well as the impact of data initialization. Compensating range-dependent image distortion requires prior knowledge about the geometry of the scene, e.g., from digital surface models [4]. This information is not available in the default SAR-to-optical translation task. In contrast to related work on SAR-to-optical image-to-image translation (see Section 2), we hypothesize that it is not possible to go the full path from SAR to actual optical data, and that the translation will always end up at a certain point in between.
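The CycleGAN-style objective underlying this kind of translation combines an adversarial term (the translated image should look like an optical image to a discriminator) with a cycle-consistency term (translating back should recover the SAR input). The following is a minimal illustrative sketch of these loss terms, not the authors' implementation; all function names, shapes, and the weight `lam` are assumptions, and the least-squares adversarial formulation follows the common CycleGAN setup.

```python
import numpy as np

# Illustrative sketch of a CycleGAN-style objective for SAR-to-optical
# translation. Conceptually, G maps SAR -> optical and F maps optical -> SAR;
# here we only compute the loss terms on toy arrays.

def lsgan_loss(d_scores_fake):
    # Least-squares adversarial loss for the generator: push discriminator
    # scores on generated (fake) images toward 1, the "real" label.
    return np.mean((d_scores_fake - 1.0) ** 2)

def cycle_loss(x, x_cycled):
    # L1 cycle-consistency: F(G(x)) should reproduce the original input x.
    return np.mean(np.abs(x - x_cycled))

def generator_objective(d_scores_fake, sar, sar_cycled, lam=10.0):
    # Total generator loss: adversarial term plus lambda-weighted cycle term
    # (lam=10 is a typical but assumed choice).
    return lsgan_loss(d_scores_fake) + lam * cycle_loss(sar, sar_cycled)

# Toy example with random "images" standing in for network outputs.
rng = np.random.default_rng(0)
sar = rng.random((1, 64, 64))
sar_cycled = sar + 0.01 * rng.standard_normal(sar.shape)   # imperfect cycle
d_scores = rng.random((1, 8, 8))                           # patch scores
loss = generator_objective(d_scores, sar, sar_cycled)
print(float(loss))
```

The cycle term is what lets training proceed without pixel-aligned SAR/optical pairs; the adversarial term alone would allow the generator to produce any plausible optical-looking image regardless of the SAR content.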

Related Work
Aspects of Interest
The CycleGAN Architecture
Optimization Steps
Data for Case Study
Study Set Up
Results and Comparison
Support of Interpretation
Extraction of Features
Combination of Features and Context
Conclusions and Outlook