Abstract

With the launch of various remote-sensing satellites, more and more high-spatial resolution remote-sensing (HSR-RS) images are becoming available. Scene classification of such a huge volume of HSR-RS images poses a major challenge for efficient feature learning and model training. The deep convolutional neural network (CNN), a typical deep learning model, is an efficient end-to-end deep hierarchical feature learning model that can capture the intrinsic features of input HSR-RS images. However, most published CNN architectures are borrowed from natural scene classification with thousands of training samples, and they are not designed for HSR-RS images. In this paper, we propose an agile CNN architecture, named SatCNN, for HSR-RS image scene classification. Based on recent improvements to modern CNN architectures, we use more efficient convolutional layers with smaller kernels to build an effective CNN architecture. Experiments on the SAT data sets confirm that SatCNN can quickly and effectively learn robust features to handle the intra-class diversity even with small convolutional kernels, and the deeper convolutional layers allow spontaneous modelling of the relative spatial relationships. With the help of fast graphics processing unit acceleration, SatCNN can be trained within about 40 min, achieving overall accuracies of 99.65% and 99.54%, which are state-of-the-art results for the SAT data sets.
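To illustrate the design idea of stacking smaller convolutional kernels into a deeper feature extractor, the following is a minimal PyTorch sketch. It is not the authors' exact SatCNN: the layer counts, channel widths, and hidden sizes are illustrative assumptions, and the 28 x 28, 4-band input shape reflects the SAT image patches.

# Illustrative sketch (not the published SatCNN configuration): a compact CNN
# built from stacked 3x3 convolutions, assuming 28x28 patches with 4 spectral
# bands (as in the SAT data sets) and a configurable number of scene classes.
import torch
import torch.nn as nn

class SmallKernelCNN(nn.Module):
    def __init__(self, in_bands: int = 4, num_classes: int = 6):
        super().__init__()
        # Stacks of 3x3 kernels cover a large receptive field with fewer
        # parameters than a single large-kernel layer.
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 14x14 -> 7x7
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 7 * 7, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    # Example: a batch of eight 4-band 28x28 patches.
    model = SmallKernelCNN(in_bands=4, num_classes=6)
    logits = model(torch.randn(8, 4, 28, 28))
    print(logits.shape)  # torch.Size([8, 6])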
