Abstract

Accurate translation of aerial imagery to maps is a valuable and challenging task in mapping, since it generates maps without the vector data that traditional mapping methods require. Recent advances in image translation based on generative adversarial networks have driven rapid progress in aerial image-to-map translation, yet the generated results still fall short in quality, accuracy, and visual impact. This paper proposes a supervised model (SAM-GAN) based on generative adversarial networks (GAN) to improve the performance of aerial image-to-map translation. The model introduces a new generator and a multi-scale discriminator. The generator is a conditional GAN that extracts content and style spaces from aerial images and maps and learns to generalize the patterns of aerial image-to-map style transformation. We introduce an image style loss and a topological consistency loss to improve the model’s pixel-level accuracy and topological performance. Using the Maps dataset and established evaluation metrics, we conduct a comprehensive qualitative and quantitative comparison between SAM-GAN and previous aerial image-to-map translation methods. Experiments show that SAM-GAN outperforms existing methods in both quantitative and qualitative results.
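The abstract names three generator loss terms (adversarial, image style, and topological consistency) but does not give their formulas. As a minimal sketch only, the following shows one common way such a composite objective is assembled, assuming a Gram-matrix style loss and a weighted sum of terms; the function names, Gram-matrix formulation, and weights are illustrative assumptions, not the paper's actual definitions:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    # features: (C, H*W) flattened feature map from some encoder layer.
    # The Gram matrix captures channel correlations, a common style statistic.
    return features @ features.T / features.shape[1]

def style_loss(feat_gen: np.ndarray, feat_target: np.ndarray) -> float:
    # Hypothetical image style loss: mean squared difference of Gram matrices
    # between the generated map's features and the target map's features.
    g_gen, g_tgt = gram_matrix(feat_gen), gram_matrix(feat_target)
    return float(np.mean((g_gen - g_tgt) ** 2))

def composite_generator_loss(adv: float, style: float, topo: float,
                             lambda_style: float = 1.0,
                             lambda_topo: float = 1.0) -> float:
    # Weighted sum of adversarial, style, and topological-consistency terms;
    # the weights lambda_style and lambda_topo are placeholders.
    return adv + lambda_style * style + lambda_topo * topo
```

For identical feature maps the style loss is zero, so during training it only penalizes style statistics of the generated map that drift from the ground-truth map.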

