Abstract

Online maps play an essential role in modern life. The ease of acquiring remote sensing images provides reliable geographic information sources for compiling online maps. Some existing works have used the idea of domain mapping to translate remote sensing images directly into maps, which holds great promise for practical application. However, many current remote sensing image-to-map translation methods operate in an unsupervised manner, which leads to problems such as distortion and inaccurate local detail. Although fully-supervised methods are effective, they require plenty of paired and matched data for training. Paired remote sensing images and maps with consistent spatial locations can easily be obtained through online map services, but in many of these sample pairs some geographic element information is not accurately and completely matched. Supervised learning-based translation models are often confused by such unmatched data. Accurately and completely matched data must therefore be selected by hand, a time-consuming and laborious process that poses new challenges. We therefore propose Semi-MapGen, a novel remote sensing image-to-map translation model based on semi-supervised generative adversarial networks (GANs), which requires only a small set of accurately and completely matched data together with plenty of unpaired data. In this model, we apply a knowledge extension-based learning strategy that improves the accuracy of the translated maps. In addition, we design an Expansion loss and a Channel-wise loss to learn information from massive unpaired data in an unsupervised manner. Qualitative and quantitative experiments on three datasets demonstrate that the proposed model outperforms state-of-the-art semi-supervised and supervised methods.
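To make the semi-supervised setup concrete, the sketch below shows the generic shape of such a generator objective: a supervised pixel-wise term computed only on the small matched set, plus an adversarial term computed on the large unpaired set. The function name, the choice of L1 and binary cross-entropy, and the weight `lam` are illustrative assumptions; this is not the paper's Semi-MapGen model, and the Expansion and Channel-wise losses it introduces are not reproduced here.

```python
import numpy as np

def semi_supervised_gen_loss(fake_paired, real_paired, d_scores_fake, lam=10.0):
    """Generic semi-supervised generator objective (illustrative sketch).

    fake_paired   : maps generated from the small matched image set
    real_paired   : the matched ground-truth maps
    d_scores_fake : discriminator logits on maps generated from unpaired images
    lam           : weight balancing the supervised and adversarial terms
                    (an assumed value, not taken from the paper)
    """
    # Supervised branch: pixel-wise L1 against matched ground truth.
    sup = np.mean(np.abs(fake_paired - real_paired))
    # Unsupervised branch: non-saturating adversarial loss, i.e. push the
    # discriminator's sigmoid output on generated maps toward the "real" label.
    p = 1.0 / (1.0 + np.exp(-d_scores_fake))
    adv = -np.mean(np.log(p + 1e-8))
    return lam * sup + adv
```

The split mirrors the data situation described above: the L1 term exploits the few accurately matched pairs, while the adversarial term lets the generator learn the map style from plentiful unpaired samples without requiring spatial correspondence.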
