Abstract

Pixel-based convolutional neural networks (CNNs) have demonstrated good performance in the classification of very high resolution images (VHRI), from which they extract abstract deep features. However, conventional pixel-based CNN classification demands large resources in terms of processing time and disk space, so superpixel-based CNN classification has recently become a focus of attention. We therefore propose a CNN-based deep learning method that combines superpixels extracted via energy-driven sampling (SEEDS) for VHRI classification. The approach consists of three main steps. First, following the concept of geographic object-based image analysis (GEOBIA), the image is segmented into homogeneous superpixels using the SEEDS-based superpixel segmentation method, thereby decreasing the number of processing units. Second, the training and testing data are extracted from the image and concatenated at the superpixel level at a variety of scales for the CNN. Third, the training data are used to train the parameters of the CNN, and abstract deep features are extracted from the VHRI. Using these deep features, we classify two VHRI data sets at single and multiple scales. To verify the effectiveness of SEEDS-based CNN classification, the performance of SEEDS and three other superpixel segmentation algorithms is compared; superpixel extraction via SEEDS was found to be the optimal superpixel segmentation approach for CNN classification. The effect of scale on CNN classification accuracy was also investigated by comparing the four superpixel segmentation methods. We found that (1) there is no strong evidence that combining scales is better than using a single scale in some specific situations; (2) natural objects with low complexity are not as sensitive to scale as artificial objects; (3) for a simple VHRI containing clear artificial objects and simple texture, classification with multiple scales performs better than with a single scale; and (4) in contrast, for a complex VHRI containing a large number of complex objects, classification with a single small scale performs best.
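
The abstract does not include code, so the following is a minimal sketch of the described pipeline: SEEDS superpixel segmentation, extraction of a fixed-size patch per superpixel at one scale, and a small CNN classifier. It assumes OpenCV's ximgproc SEEDS implementation (opencv-contrib-python) and TensorFlow/Keras; the patch size, superpixel count, and toy network architecture are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: SEEDS segmentation -> per-superpixel patches -> small CNN.
# Requires opencv-contrib-python, numpy, tensorflow; all sizes are illustrative.
import cv2
import numpy as np
import tensorflow as tf

def seeds_segment(img, num_superpixels=2000, num_levels=4, num_iters=10):
    """Segment an H x W x 3 image into superpixels with OpenCV's SEEDS."""
    h, w, c = img.shape
    seeds = cv2.ximgproc.createSuperpixelSEEDS(w, h, c, num_superpixels, num_levels)
    seeds.iterate(img, num_iters)
    return seeds.getLabels(), seeds.getNumberOfSuperpixels()

def superpixel_patches(img, labels, n_superpixels, patch_size=32):
    """Crop one fixed-size patch centred on each superpixel (a single 'scale')."""
    half = patch_size // 2
    padded = np.pad(img, ((half, half), (half, half), (0, 0)), mode="reflect")
    patches = np.empty((n_superpixels, patch_size, patch_size, 3), img.dtype)
    for sp in range(n_superpixels):
        ys, xs = np.nonzero(labels == sp)
        cy, cx = int(ys.mean()), int(xs.mean())   # superpixel centroid
        patches[sp] = padded[cy:cy + patch_size, cx:cx + patch_size]
    return patches

def build_cnn(patch_size=32, n_classes=6):
    """Toy CNN; the paper's actual architecture is not specified in the abstract."""
    return tf.keras.Sequential([
        tf.keras.layers.Input((patch_size, patch_size, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),   # abstract deep features
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Usage (single scale): classify each superpixel, then map labels back to pixels.
img = cv2.imread("vhri_tile.tif")            # hypothetical input image
labels, n_sp = seeds_segment(img)
patches = superpixel_patches(img, labels, n_sp)
model = build_cnn()
# model.fit(...) would be run here on patches from labelled training superpixels.
pred = model.predict(patches).argmax(axis=1)
classified = pred[labels]                    # per-pixel class map
```

A multi-scale variant, in the spirit of the scale comparison in the abstract, could extract patches at several patch_size values for each superpixel and concatenate them (or fuse their CNN features) before classification; this sketch shows only the single-scale case.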
