Abstract

This paper presents a novel study of remote-sensing image translation between high-resolution optical and Synthetic Aperture Radar (SAR) data using machine learning. To this end, conditional Generative Adversarial Networks (cGANs) guided by high-level image features are proposed. The efficiency of the proposed methods has been verified with different SAR parameters in three regions of the world: Toronto and Vancouver in Canada, and Shanghai in China. The generated SAR and optical images have been evaluated by pixel-based image classification over detailed land cover types, including low- and high-density residential areas, industrial areas, construction sites, golf courses, water, forest, pasture, and crops. Results show that the translated images effectively preserve many land cover types, with classification accuracy comparable to that obtained on the ground truth data. Compared with state-of-the-art image translation approaches, the proposed methods improve translation results under common similarity indicators. This is one of the first studies of multi-source remote-sensing data translation by machine learning.
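For readers unfamiliar with conditional GANs for image-to-image translation, the sketch below shows one training step of a pix2pix-style cGAN mapping an optical patch to a SAR patch in PyTorch. It is a minimal illustration under stated assumptions, not the paper's method: the toy generator and discriminator, the 4-channel conditioning, and the L1 weight of 100 are all illustrative choices, and the paper's high-level feature guidance is not reproduced here.

```python
# Minimal pix2pix-style cGAN sketch for optical-to-SAR translation.
# All architecture and hyperparameter details are illustrative assumptions;
# the abstract does not specify the paper's exact design.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder: 3-channel optical patch -> 1-channel SAR patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the optical input (3 + 1 channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )
    def forward(self, optical, sar):
        return self.net(torch.cat([optical, sar], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

optical = torch.randn(2, 3, 64, 64)   # stand-in optical batch
real_sar = torch.randn(2, 1, 64, 64)  # stand-in co-registered SAR batch

# Discriminator step: score real (optical, SAR) pairs against generated pairs.
fake_sar = G(optical).detach()
d_real, d_fake = D(optical, real_sar), D(optical, fake_sar)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator plus an L1 reconstruction term.
fake_sar = G(optical)
d_fake = D(optical, fake_sar)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_sar, real_sar)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 term keeps generated patches close to the co-registered SAR reference so that land cover structure is preserved, while the adversarial term pushes outputs toward realistic SAR texture; this trade-off is the usual motivation for the pix2pix loss combination.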
