With their powerful feature representations, convolutional neural networks (CNNs) have achieved tremendous success in image classification tasks but typically require millions of labeled samples to train their massive numbers of parameters. However, labeling synthetic aperture radar (SAR) images is extremely difficult, especially at the pixel level, and sometimes requires field trips to accomplish. Moreover, the inherent speckle noise may weaken the ability of networks to extract effective features from SAR images. In this article, we address these issues by labeling only a few patchwise samples and propose Ridgelet-Nets with speckle reduction regularization for SAR image scene classification, combining deep learning with multiscale geometric analysis and statistical modeling of SAR images. First, we design Ridgelet-Nets whose convolutional kernels are constructed from ridgelet filters, which reduces the number of training parameters and learns more discriminative features. Then, we embed a speckle reduction regularization term in the Ridgelet-Nets to restrain the influence of speckle noise and smooth the classification maps; this term introduces prior information from statistical modeling of SAR images. Finally, considering the differences in structure and spatial relationships among regions of SAR images, particularly large-scale and complex scenes, we propose an adaptive SAR image scene classification framework based on an extended hierarchical visual semantic model. Experimental results on real SAR images demonstrate that the proposed framework achieves preferable classification performance with very limited labeled samples.
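The abstract does not specify how the ridgelet-based convolutional kernels are built; as an illustrative sketch only, a ridgelet filter can be discretized by sampling a 1D wavelet profile along an oriented line, with scale, shift, and orientation as parameters. The Ricker (Mexican-hat) wavelet used here, and the function names `ridgelet_kernel` and `bank`, are assumptions for illustration, not the paper's actual construction:

```python
import numpy as np

def ridgelet_kernel(size=5, scale=1.0, shift=0.0, theta=0.0):
    """Sample a ridgelet-like filter: a 1D wavelet profile constant
    along the direction orthogonal to theta (illustrative choice)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Ridge coordinate: project each pixel onto the orientation axis.
    t = (xs * np.cos(theta) + ys * np.sin(theta) - shift) / scale
    # Ricker (Mexican-hat) profile as the generating wavelet.
    psi = (1.0 - t**2) * np.exp(-t**2 / 2.0)
    psi -= psi.mean()                 # enforce the zero-mean wavelet property
    norm = np.linalg.norm(psi)
    return psi / norm if norm > 0 else psi

# A small fixed filter bank over 2 scales x 4 orientations, which could
# initialize (or replace) learnable convolutional kernels.
bank = np.stack([ridgelet_kernel(5, s, 0.0, th)
                 for s in (1.0, 2.0)
                 for th in np.linspace(0.0, np.pi, 4, endpoint=False)])
```

Because the kernels are generated from a few parameters (scale, shift, orientation) rather than learned freely, a bank like this has far fewer trainable parameters than standard convolutional layers, which is consistent with the parameter-reduction claim in the abstract.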