Abstract

Deep neural networks (DNNs) face several problems in very high resolution remote sensing (VHRRS) per-pixel classification. As network depth increases, vanishing gradients degrade classification accuracy, and the correspondingly larger number of parameters to be learned raises the risk of overfitting, especially when only a small number of labeled VHRRS samples are available for training. Further, the hidden layers in DNNs are not transparent enough, so the extracted features are insufficiently discriminative and contain significant redundancy. This paper proposes a novel depth-width-reinforced DNN that addresses these problems to produce better per-pixel classification results on VHRRS imagery. In the proposed method, densely connected neural networks and internal classifiers are combined to build a deeper network while balancing network depth against performance. This strengthens the gradients, reduces the negative effects of vanishing gradients as depth increases, and enhances the transparency of hidden layers, making the extracted features more discriminative and lowering the risk of overfitting. In addition, the proposed method uses multi-scale filters to create a wider network. The depth of the filters at each scale is controlled to reduce redundancy, and the multi-scale filters exploit joint spatio-spectral information and diverse local spatial structure simultaneously. Furthermore, the network-in-network concept is applied to better fuse the deeper and wider designs, making the network operate more smoothly. Experiments conducted on BJ02, GF02, GeoEye, and QuickBird satellite images verify the efficacy of the proposed method.
The proposed method not only achieves competitive classification results but also remains robust as the number of labeled training samples decreases, which fits the small-training-sample situation faced by VHRRS per-pixel classification.
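The core architectural idea in the abstract is dense connectivity: each layer receives the channel-wise concatenation of all preceding feature maps, so gradients and features are reused rather than lost with depth. The following is a minimal illustrative sketch of that connectivity pattern in NumPy; the per-layer transform here is a random linear map over channels standing in for convolution plus nonlinearity, and the function name, `growth_rate`, and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dense_block(x, num_layers=3, growth_rate=4, rng=None):
    """Illustrative dense block (channels-last): each layer sees the
    concatenation of the input and all earlier layers' outputs, and
    contributes `growth_rate` new channels to the running feature stack."""
    rng = rng or np.random.default_rng(0)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)   # reuse ALL earlier maps
        w = rng.standard_normal((inp.shape[-1], growth_rate))
        new = np.maximum(inp @ w, 0.0)            # stand-in conv + ReLU
        features.append(new)
    return np.concatenate(features, axis=-1)

x = np.ones((8, 8, 3))                            # H x W x C image patch
out = dense_block(x, num_layers=3, growth_rate=4)
# channels grow as C + num_layers * growth_rate = 3 + 3*4 = 15
print(out.shape)  # (8, 8, 15)
```

Because every layer's output stays in the concatenated stack, later layers (and, in the paper's design, internal classifiers) get short paths back to early features, which is what strengthens the gradients.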

Highlights

  • In per-pixel classification of very high resolution remote sensing (VHRRS) images, each pixel is assigned a corresponding label representing the category to which it belongs

  • To address these problems, we propose a novel DenseNet-based deep neural network (DNN) with multi-scale filters that extracts features from VHRRS images and produces better classification results

  • We used the two most common criteria, overall accuracy (OA) and the kappa coefficient: OA is the ratio of correctly classified pixels to all pixels, and kappa measures classification consistency corrected for chance agreement
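The two criteria above can be computed directly from the reference and predicted label lists. The sketch below, in plain Python, assumes nothing beyond the standard definitions: OA is the fraction of matching pixels, and Cohen's kappa is (p_o - p_e) / (1 - p_e), where p_o is the OA and p_e is the chance agreement expected from the two label distributions. The function names and the toy labels are illustrative.

```python
from collections import Counter

def overall_accuracy(y_true, y_pred):
    """OA: fraction of pixels whose predicted label matches the reference."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def cohens_kappa(y_true, y_pred):
    """Kappa = (p_o - p_e) / (1 - p_e), where p_e is the agreement
    expected by chance from the marginal label frequencies."""
    n = len(y_true)
    p_o = overall_accuracy(y_true, y_pred)
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts.get(c, 0)
              for c in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: 6 pixels, 3 classes, one misclassified pixel.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 1, 2, 1]
print(overall_accuracy(y_true, y_pred))  # 0.8333...
print(cohens_kappa(y_true, y_pred))      # 0.75
```

Kappa is lower than OA here because part of the raw agreement would be expected by chance alone; that correction is why it serves as a consistency check alongside OA.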


Introduction

In per-pixel classification of very high resolution remote sensing (VHRRS) images, each pixel is assigned a label representing the category to which it belongs. Two feature extraction approaches exist: handcrafted [5,6,7,8,9] and deep learning-based [10,11,12]. In the deep learning-based approach, intrinsic and hierarchical features are learned automatically from raw data. This often produces better results than the handcrafted approach [13], which requires the laborious involvement of experts with a priori knowledge in feature design and selection.

