Abstract

In this paper, we present an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model originally proposed for photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adapting the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate whether the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.
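The Lab-space fusion mentioned in the abstract can be sketched as follows: the SAR backscatter image supplies the luminance (L) channel, while the network prediction supplies the two chrominance (a, b) channels. This is a minimal illustration with placeholder arrays; the function name and value ranges are assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_sar_with_predicted_color(sar_l, pred_ab):
    """Combine a single-channel SAR image (used as the L channel) with
    predicted a/b chrominance channels into a single Lab image.

    sar_l   : (H, W) array, SAR backscatter scaled to the L range [0, 100]
    pred_ab : (H, W, 2) array, predicted a and b channels
    """
    assert sar_l.shape == pred_ab.shape[:2]
    # Stack L, a, b along the channel axis -> (H, W, 3) Lab image
    lab = np.concatenate([sar_l[..., None], pred_ab], axis=-1)
    return lab

# Example with random placeholder data (real inputs would come from the
# preprocessed Sentinel-1 image and the network prediction):
sar_l = np.random.uniform(0, 100, size=(4, 4))
pred_ab = np.random.uniform(-128, 127, size=(4, 4, 2))
lab = fuse_sar_with_predicted_color(sar_l, pred_ab)
print(lab.shape)  # (4, 4, 3)
```

For display, the fused Lab image would then be converted to RGB, e.g. with `skimage.color.lab2rgb`; that conversion step is omitted here to keep the sketch dependency-free.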

Highlights

  • Synthetic aperture radar (SAR) images are completely different from optical images in terms of both geometric and radiometric appearance: while SAR is a range-based imaging modality and measures physical properties of the observed scene, optical imagery basically represents an angular measurement system and collects information about the chemical characteristics of the environment.

  • For evaluation of the colorization capabilities of the architecture described in Section 4, we order the mixture density network (MDN)-predicted Gaussian mixture model means μi in descending order based on the mixture weights πi and display the results of the top-8 means for some example images of our test set comprising 1024 images.

  • The β parameters are the default parameters recommended for the Adam optimization algorithm, while the learning rates were based on the details provided by Deshpande et al. (2017).
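The ranking step described in the highlights (sorting the MDN's Gaussian mixture means by mixture weight and keeping the top-8) can be sketched as below. The function name and the toy arrays are illustrative assumptions, not the paper's code.

```python
import numpy as np

def top_k_mixture_means(means, weights, k=8):
    """Order GMM component means mu_i by mixture weight pi_i (descending)
    and return the top-k means together with their weights.

    means   : (N, D) array of component means
    weights : (N,)  array of mixture weights (summing to 1)
    """
    order = np.argsort(weights)[::-1]  # component indices by descending weight
    return means[order[:k]], weights[order[:k]]

# Toy example with 4 components of 2-D means:
weights = np.array([0.05, 0.4, 0.25, 0.3])
means = np.arange(8).reshape(4, 2)
top_means, top_w = top_k_mixture_means(means, weights, k=2)
print(top_w)  # [0.4 0.3]
```

In the paper's setting each mean would be a predicted chrominance field, and the top-8 means per test image are rendered for visual comparison.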
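For reference, the default Adam β parameters mentioned above are β1 = 0.9 and β2 = 0.999. A single Adam update step with those defaults looks as follows; the learning rate shown is a placeholder, not the value used in the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update using the recommended default betas (0.9, 0.999)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# One step on a scalar parameter with unit gradient:
theta, m, v = adam_step(theta=1.0, grad=1.0, m=0.0, v=0.0, t=1)
print(theta < 1.0)  # True: parameter moved against the gradient
```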


Summary

INTRODUCTION

Synthetic aperture radar (SAR) images are completely different from optical images in terms of both geometric and radiometric appearance: while SAR is a range-based imaging modality and measures physical properties of the observed scene, optical imagery basically represents an angular measurement system and collects information about the chemical characteristics of the environment. SAR image interpretation can be alleviated when optical colors are used to support the interpretation process. This is a special case of remote sensing image fusion (Pohl and van Genderen, 1998; Schmitt and Zhu, 2016). To overcome the need for accompanying optical imagery, this paper proposes to learn feasible colorizations of Sentinel-1 SAR images from coregistered Sentinel-2 training examples using deep learning techniques. This is meant to provide a significant step in SAR-optical data fusion (Schmitt et al., 2017) with application to improved SAR image understanding, and will enable SAR data providers to attach colorized versions of their imagery to their products.

SAR-OPTICAL IMAGE FUSION BY COLOR SPACE TRANSFORM
THE DATASET
Deep Generative Architecture
Implementation Details
Training
COLORIZATION RESULTS
SUMMARY AND CONCLUSION
