Abstract

Road network extraction from remotely sensed imagery has become a powerful tool for updating geospatial databases, owing to the success of convolutional neural network (CNN) based deep learning semantic segmentation techniques combined with the high-resolution imagery that modern remote sensing provides. However, most CNN approaches cannot obtain high-precision segmentation maps with rich details when processing high-resolution remote sensing imagery. In this study, we propose a generative adversarial network (GAN)-based deep learning approach for road segmentation from high-resolution aerial imagery. In the generative part of the presented GAN approach, we use a modified U-Net model (MUNet) to obtain a high-resolution segmentation map of the road network. In combination with simple pre-processing comprising edge-preserving filtering, the proposed approach offers a significant improvement in road network segmentation compared with prior approaches. In experiments conducted on the Massachusetts road image dataset, the proposed approach achieves 91.54% precision and 92.92% recall, which correspond to a Matthews correlation coefficient (MCC) of 91.13%, a mean intersection over union (MIoU) of 87.43% and an F1-score of 92.20%. Comparisons demonstrate that the proposed GAN framework outperforms prior CNN-based approaches and is particularly effective in preserving edge information.
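All of the metrics reported above (precision, recall, F1, MCC, MIoU) follow directly from a per-pixel binary confusion matrix. A minimal sketch of the computations, using hypothetical counts rather than the paper's data:

```python
import math

# Hypothetical per-pixel confusion-matrix counts for binary road /
# background segmentation (TP = road pixels correctly labelled road).
TP, FP, FN, TN = 80, 10, 10, 900

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

# Matthews correlation coefficient: stays informative even when the
# background class vastly outnumbers road pixels.
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))

# Mean IoU: average the intersection-over-union of both classes.
iou_road = TP / (TP + FP + FN)
iou_background = TN / (TN + FP + FN)
miou = (iou_road + iou_background) / 2
```

Because road pixels are typically a small minority of an aerial image, MCC and MIoU are the more robust summaries here: a classifier that labels everything "background" would still score high pixel accuracy but near-zero MCC.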

Highlights

  • Compared with aerial images that are typically restricted to three red, green, and blue (RGB) spectral channels and available for limited geographic areas, satellite imagery commonly includes further spectral channels and provides almost worldwide coverage at high resolution [1]

  • To provide context for our presentation, we summarize previous works using deep learning approaches for road extraction in remote sensing images. Wang et al. [25] described a semi-automatic approach based on a deep convolutional neural network (DNN) and a finite state machine (FSM), consisting of two principal stages, namely training and tracking, for road extraction from high-resolution remote sensing images

  • Compared to prior generative adversarial network (GAN)-based road extraction approaches, such as GAN+fully connected network (FCN) proposed in [32], GAN+SegNet presented in [21], the Ensemble Wasserstein Generative Adversarial Network (E-WGAN) proposed in [33], the Multi-supervised Generative Adversarial Network (MsGAN) presented in [34], and the Multi-conditional Generative Adversarial Network (McGAN) implemented in [35], we introduce the modified U-Net model (MUNet) for the generative term to create a high-resolution smooth segmentation map with high spatial consistency and clear segmentation boundaries
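In the adversarial segmentation setups referenced above, the generator (here MUNet) is typically trained on a combination of a pixel-wise segmentation loss against the ground-truth mask and an adversarial term that rewards fooling the discriminator. A minimal numeric sketch of such a combined generator objective, with all probabilities, labels, and the weighting factor `lam` hypothetical (the paper's exact loss is not reproduced here):

```python
import math

def bce(p, target):
    """Binary cross-entropy for a single probability, clamped for stability."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def generator_loss(pixel_probs, pixel_labels, disc_score, lam=0.5):
    # Segmentation term: average per-pixel BCE against the ground-truth mask.
    seg = sum(bce(p, t) for p, t in zip(pixel_probs, pixel_labels)) / len(pixel_probs)
    # Adversarial term: the generator wants the discriminator to
    # judge its segmentation map as "real" (target 1).
    adv = bce(disc_score, 1.0)
    return seg + lam * adv
```

A generator that segments accurately and fools the discriminator incurs a lower loss than one that does neither, which is what drives the training dynamics described in the highlight.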



Introduction

Compared with aerial images, which are typically restricted to three red, green, and blue (RGB) spectral channels and available for limited geographic areas, satellite imagery commonly includes further spectral channels and provides almost worldwide coverage at high resolution [1]. High-resolution remote sensing imagery is an attractive option for extracting road segments to aid the development of maps for geospatial information systems (GIS) users, transportation practitioners, geodetic researchers, and urban/municipal planners and officers [2]–[4]. Although the per-pixel resolution of high-quality satellite images is worse than that of aerial images, it is adequate for extracting road sections. Automated methods are necessary for accurately extracting road segments from high-resolution remote sensing imagery [8].
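The abstract mentions simple pre-processing comprising edge-preserving filtering before segmentation. A bilateral-style filter is one common choice for this (an assumption here, not the paper's stated filter): it averages each sample with neighbours weighted by both spatial distance and intensity difference, so noise within a region is smoothed while sharp road/background transitions survive. A 1-D sketch:

```python
import math

def bilateral_filter_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Edge-preserving smoothing: weights fall off with both spatial
    distance and intensity difference, so averaging does not cross
    sharp edges the way a plain Gaussian blur would."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((center - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Noisy step edge: a low plateau followed by a high plateau, as at a
# road boundary in an intensity profile.
noisy = [0.0, 0.05, -0.04, 0.03, 1.0, 0.97, 1.05, 0.98]
smoothed = bilateral_filter_1d(noisy)
```

With `sigma_r` small relative to the step height, samples across the edge receive negligible weight, so the step between indices 3 and 4 is preserved while each plateau is smoothed. In practice a 2-D variant (e.g. a standard bilateral filter from an image-processing library) would be applied to the aerial image channels.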
