Abstract

The convolutional neural network (CNN) has achieved great success in the field of scene classification. Nevertheless, the strong spatial information encoded by CNNs and the irregular repetitive patterns in synthetic aperture radar (SAR) images make CNN feature descriptors less discriminative for scene classification. Aiming to provide more discriminative feature representations for SAR scene classification, a generalized compact channel-boosted high-order orderless pooling network (GCCH) is proposed. The GCCH network comprises four parts: a standard convolution layer, a second-order generalized layer, a squeeze-and-excitation block, and a compact high-order generalized orderless pooling layer. All layers are trained by back-propagation, so the parameters can be optimized end-to-end. First, the second-order orderless feature representation is acquired by parameterized locality-constrained affine subspace coding (LASC) in the second-order generalized layer, which cascades the first- and second-order orderless feature descriptors of the output of the standard convolution layer. Subsequently, the squeeze-and-excitation block learns the channel information of the parameterized LASC statistical representation by explicitly modelling interdependencies between channels. Lastly, compact high-order orderless feature descriptors are learned automatically by a kernelized outer product, which yields low-dimensional but highly discriminative descriptors. For validation and comparison, we conducted extensive experiments on a SAR scene classification dataset built from TerraSAR-X images. Experimental results illustrate that the GCCH network achieves more competitive performance than state-of-the-art networks on the SAR image scene classification task.
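To make two of the building blocks named above more concrete, the following is a minimal PyTorch sketch, not the authors' GCCH implementation: a squeeze-and-excitation block that recalibrates channels, and a compact second-order pooling layer that approximates the outer product with Count Sketch and FFT (the Tensor Sketch trick commonly used for kernelized outer products). The module names and the `reduction` and `out_dim` values are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn


class SqueezeExcite(nn.Module):
    """Channel recalibration: global average pooling, bottleneck MLP, sigmoid gate."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                   # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                              # squeeze to (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)          # excitation weights (B, C, 1, 1)
        return x * w                                        # channel-wise rescaling


class CompactSecondOrderPool(nn.Module):
    """Low-dimensional approximation of outer-product (second-order) pooling
    via Count Sketch + FFT (Tensor Sketch)."""

    def __init__(self, channels: int, out_dim: int = 2048):
        super().__init__()
        self.out_dim = out_dim
        for k in (1, 2):                                    # two independent random sketches
            self.register_buffer(f"h{k}", torch.randint(out_dim, (channels,)))
            self.register_buffer(f"s{k}", torch.randint(0, 2, (channels,)).float() * 2 - 1)

    def _sketch(self, x, h, s):                             # x: (B, N, C) -> (B, N, out_dim)
        out = x.new_zeros(*x.shape[:-1], self.out_dim)
        out.index_add_(2, h, x * s)                         # scatter signed features by hash bucket
        return out

    def forward(self, x):                                   # x: (B, C, H, W)
        x = x.flatten(2).transpose(1, 2)                    # local descriptors (B, HW, C)
        p1 = torch.fft.rfft(self._sketch(x, self.h1, self.s1), dim=-1)
        p2 = torch.fft.rfft(self._sketch(x, self.h2, self.s2), dim=-1)
        y = torch.fft.irfft(p1 * p2, n=self.out_dim, dim=-1)   # sketched outer product
        return y.sum(dim=1)                                 # orderless (sum) pooling over locations


# toy usage on a hypothetical 64-channel feature map
feat = torch.randn(2, 64, 8, 8)
feat = SqueezeExcite(64)(feat)
desc = CompactSecondOrderPool(64, out_dim=512)(feat)
print(desc.shape)                                           # torch.Size([2, 512])
```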

Highlights

  • With the rapid development of various synthetic aperture radar (SAR) sensors, large amounts of high-quality SAR remote sensing images have been produced

  • The evaluation of the generalized compact channel-boosted high-order orderless pooling network (GCCH) network is tested on the TerraSAR-X database; this dataset contains 5000 scene images from 10 classes, and each class consists of 500 images with the size of 256 × 256


Summary

Introduction

With the rapid development of various synthetic aperture radar (SAR) sensors, large amounts of high-quality SAR remote sensing images have been produced. Locality-constrained affine subspace coding (LASC) builds a dictionary of affine subspaces and performs second-order coding based on information geometry, which has proven helpful for improving classification accuracy. For a local feature $y$, the coding on each subspace $S_i$ is weighted by the exponentiated Euclidean distance $d(y, S_i)$ through $\omega(y, S_i) = \exp(-d(y, S_i)/\sigma)$, where $\sigma$ is obtained by cross-validation. Cascading the feature coding of each subspace, the first-order LASC vector takes the form $c = [\omega(y, S_1)\, c_{S_1}^T, \ldots, \omega(y, S_m)\, c_{S_m}^T]^T$. The second-order coding form of LASC can be derived from Fisher information: define the gradient vector of the log-likelihood as $g_{S_i} = \nabla_{S_i} \log p(z_i \mid S_i)$; based on Fisher information theory, the Fisher vector of $z_i$ is then defined as $f_{S_i}$. The second-order LASC is calculated as a weighted Fisher vector, formulated as $f = [\omega(y, S_1)\, f_{S_1}^T, \ldots, \omega(y, S_m)\, f_{S_m}^T]^T$. Concatenating the first- and second-order LASC statistical information yields the entire LASC feature representation.
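As a small illustration of the first-order coding step described above, here is a NumPy sketch under my reading of that description; the affine-subspace dictionary, the residual-based distance, `sigma`, and the `knn` locality restriction are assumptions for illustration rather than the paper's exact formulation.

```python
import numpy as np


def lasc_first_order(y, subspaces, sigma=1.0, knn=5):
    """First-order LASC coding of a single local descriptor y (illustrative sketch).

    subspaces: list of (mu_i, U_i) pairs, where mu_i is the origin of affine
    subspace S_i and U_i is an orthonormal basis (d x p). sigma and knn are
    hypothetical hyper-parameters; sigma is obtained by cross-validation in the paper.
    """
    codes, dists = [], []
    for mu, U in subspaces:
        r = y - mu
        proj = U.T @ r                                  # coordinates inside S_i
        codes.append(proj)
        dists.append(np.sum((r - U @ proj) ** 2))       # squared residual distance d(y, S_i)
    dists = np.asarray(dists)

    # exponentiated-distance locality weights, restricted to the knn nearest subspaces
    w = np.exp(-dists / sigma)
    mask = np.zeros_like(w)
    mask[np.argsort(dists)[:knn]] = 1.0
    w = w * mask
    w /= w.sum() + 1e-12

    # cascade the weighted per-subspace codes into the first-order LASC vector
    return np.concatenate([wi * ci for wi, ci in zip(w, codes)])


# toy usage: three random 2-dimensional affine subspaces in R^8
rng = np.random.default_rng(0)
subspaces = [(rng.standard_normal(8), np.linalg.qr(rng.standard_normal((8, 2)))[0])
             for _ in range(3)]
print(lasc_first_order(rng.standard_normal(8), subspaces).shape)    # (6,)
```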

Generalized Compact Channel-Boosted High-Order Orderless Pooling Network
Method
Data Set
Comparison with Other Mid-Level Feature Representation Methods
The Ablation and Combined Experiments
Conclusions
