Abstract

Obtaining near real-time road features is critical in emergency situations such as floods and geological disasters. Remote sensing images with very high spatial resolution usually contain abundant land-use and land-cover detail, which complicates the detection and extraction of road features. In this paper, we propose a deep residual deconvolutional network (Deep ResDCLnet) to extract road features from unmanned aerial vehicle (UAV) images. The proposed network combines the deep encoder-decoder structure of the SegNet architecture, the rich skip connections of a residual bottleneck, and the direct relationships among intermediate feature maps from the pixel deconvolution algorithm. It improves the performance of a supervised learning model by differentiating and extracting complex road features from aerial photographs and UAV imagery. The proposed network is evaluated on the standard public Massachusetts road dataset and on a UAV dataset collected along the Yangtze River, and is compared with four state-of-the-art network architectures. The results show that Deep ResDCLnet outperforms all four networks in extraction accuracy, demonstrating its effectiveness for road extraction from very high spatial resolution imagery.
