Abstract

Semantic segmentation of high-resolution aerial images is of great importance in certain fields, but the increasing spatial resolution brings large intra-class variance and small inter-class differences that can lead to classification ambiguities. Based on high-level contextual features, the deep convolutional neural network (DCNN) is an effective method for semantic segmentation of high-resolution aerial imagery. In this work, a novel dense pyramid network (DPN) is proposed for semantic segmentation. The network starts with group convolutions that process the multi-sensor data in a channel-wise fashion, extracting the feature maps of each channel separately; by doing so, more information from each channel can be preserved. This process is followed by a channel shuffle operation to enhance the representation ability of the network. Then, four densely connected convolutional blocks are utilized to both extract and take full advantage of features. A pyramid pooling module combined with two convolutional layers is used to fuse multi-resolution and multi-sensor features through an effective global scene prior, producing a probability map for each class. Moreover, the median frequency balanced focal loss is proposed to replace the standard cross-entropy loss in the training phase to deal with the class imbalance problem. We evaluate the dense pyramid network on the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen and Potsdam 2D semantic labeling datasets, and the results demonstrate that the proposed framework achieves better performance compared with state-of-the-art baselines.
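The paper does not include an implementation, but the median frequency balanced focal loss it describes combines two standard ingredients: per-class weights set to the median class frequency divided by each class's frequency, and the focal modulation term (1 − p)^γ applied to the cross-entropy. A minimal numpy sketch under that reading (function names and the default γ = 2 are illustrative assumptions, not from the paper) might look like:

```python
import numpy as np

def median_frequency_weights(label_counts):
    """Per-class weights: median class frequency / class frequency.
    Rare classes receive weights > 1, frequent classes < 1."""
    freq = label_counts / label_counts.sum()
    return np.median(freq) / freq

def mfb_focal_loss(probs, labels, weights, gamma=2.0):
    """Median-frequency-balanced focal loss over a batch of pixels.

    probs   : (N, C) softmax outputs
    labels  : (N,) integer class ids
    weights : (C,) median-frequency class weights
    gamma   : focusing parameter; gamma=0 recovers weighted cross-entropy
    """
    pt = probs[np.arange(len(labels)), labels]          # prob of true class
    per_pixel = -weights[labels] * (1.0 - pt) ** gamma * np.log(pt)
    return float(per_pixel.mean())
```

With γ = 0 and unit weights this reduces to plain cross-entropy, which is a quick sanity check that the modulation and weighting are wired in correctly.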

Highlights

  • In the past few years, image analysis has benefited from deep convolutional neural networks (DCNN), which have been widely applied in image processing tasks, ranging from image classification to object recognition, image super-resolution and semantic segmentation [1,2,3]

  • We evaluated the proposed network on the Vaihingen and Potsdam datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) [31] as part of its benchmark on high-resolution aerial image labeling

  • The Vaihingen dataset comprises 33 true orthophotos (TOP) at a spatial resolution of 9 cm


Summary

Introduction

In the past few years, image analysis has benefited from deep convolutional neural networks (DCNNs), which have been widely applied in image processing tasks ranging from image classification to object recognition, image super-resolution, and semantic segmentation [1,2,3]. Some methods [11,19,20] stacked multi-sensor data, including CIR images and LiDAR data, as one input vector to train the networks. This rough combination of multi-sensor data at the first layer leads to classification ambiguities for certain objects and may suffer from an information loss problem, since the fusion of multi-sensor data is not always better than any single data source. The existing DCNNs for semantic segmentation of high-resolution remote sensing images thus suffer from insufficient spatial and contextual information. To overcome these problems, we propose a dense pyramid network (DPN). The architecture of the proposed DPN includes three main parts: (1) the group convolutions for channel-wise feature extraction; (2) the densely connected convolutional blocks for high-level semantic feature extraction; and (3) the pyramid pooling operation for multi-sensor and multi-resolution feature fusion.
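The channel shuffle step that follows the group convolutions is the ShuffleNet-style operation: channels are regrouped so that later group convolutions see features from every earlier group rather than staying within their own group. A minimal numpy sketch of that reshuffle (the function name and NCHW layout are assumptions for illustration) might look like:

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle on an NCHW tensor.

    Reshape channels into (groups, channels_per_group), swap those two
    axes, and flatten back, so channel i of each group is interleaved
    across all groups for the next group convolution.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap group axis and per-group channel axis
    return x.reshape(n, c, h, w)
```

For example, with 4 channels in 2 groups, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3]: each output group now mixes one channel from each input group.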

Group Convolution
Multi-Sensor Data Fusion
Densely Connected Convolutional Blocks
Pyramid Pooling Module
Median Frequency Balanced Focal Loss
Training and Inference Strategy of DPN
Dataset and Evaluation Metrics
Ablation Study
Methods
Vaihingen Dataset
Qualitative Comparison with Other Competitors
Potsdam Dataset
Conclusions
