Abstract

Inspired by the enormous success of fully convolutional networks (FCNs) in semantic segmentation, and by the similarity between semantic segmentation and pixel-wise polarimetric synthetic aperture radar (PolSAR) image classification, exploring how to effectively combine unique polarimetric properties with an FCN is a promising direction for PolSAR image classification. Moreover, recent research shows that sparse and low-rank representations can convey valuable information for classification. This paper therefore presents an effective PolSAR image classification scheme that integrates deep spatial patterns learned automatically by an FCN with sparse and low-rank subspace features: (1) shallow subspace learning based on sparse and low-rank graph embedding is first introduced to capture the local and global structures of high-dimensional polarimetric data; (2) a pre-trained deep FCN-8s model is transferred to extract nonlinear, deep, multi-scale spatial information from the PolSAR image; and (3) the shallow sparse and low-rank subspace features are integrated to boost the discrimination of the deep spatial features. The integrated hierarchical subspace features are then combined with a discriminative model for classification. Extensive experiments on three real PolSAR data sets indicate that the proposed method achieves competitive performance, particularly when the available training samples are limited.
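The core of the scheme is the fusion of two feature sets per pixel: deep spatial features from the FCN and shallow sparse/low-rank subspace features. The paper does not specify the fusion operator, so the following is a minimal sketch of one plausible choice, L2-normalizing each feature set and concatenating them; the function name `integrate_features` and the toy feature matrices are hypothetical.

```python
import numpy as np

def integrate_features(deep_feats, subspace_feats):
    """Fuse deep spatial features with subspace features by
    L2-normalizing each set per pixel and concatenating.
    (Hypothetical integration step; the paper's exact operator
    is not specified in this summary.)"""
    def l2norm(x):
        # normalize each row (pixel) to unit length
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    return np.concatenate([l2norm(deep_feats), l2norm(subspace_feats)], axis=1)

# toy example: 4 pixels, 3-dim deep features, 2-dim subspace features
deep = np.array([[1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0],
                 [3.0, 0.0, 0.0],
                 [0.0, 0.0, 4.0]])
sub = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 0.0],
                [0.0, 1.0]])
fused = integrate_features(deep, sub)
print(fused.shape)  # (4, 5)
```

Normalizing each block before concatenation prevents the feature set with the larger dynamic range from dominating the subsequent discriminative classifier.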

Highlights

  • Considering that a very deep fully convolutional network (FCN) is weak at capturing local details, while, from a deep learning (DL) perspective, sparse and low-rank graph-based discriminant analysis (SLGDA) is a shallow subspace learning method that lacks spatial constraints, we integrate the deep spatial features of the FCN with the sparse and low-rank subspace features of SLGDA so that the two complement each other

  • Integrating the deep spatial features learned by the FCN with features learned via graph embedding discriminative analysis (GDA), whether block low-rank graph embedding discriminative analysis (BLGDA), block sparse graph embedding discriminative analysis (BSGDA), or BSLGDA, greatly improves accuracy for almost all classes; for Beet, Stem Bean, and Potato in particular, accuracy is about 11%, 17%, and 4% higher, respectively, than that of the FCN alone

  • This paper presents an effective classification scheme for polarimetric synthetic aperture radar (PolSAR) images, which integrates deep multi-scale spatial information learned by an FCN-8s model with shallow sparse and low-rank subspace representations

Summary

Background

Synthetic aperture radar (SAR), characterized by its nearly all-weather, all-day imaging capability, has become increasingly important in various Earth observation applications. Although region-based approaches can noticeably improve classification results, they are shallow in architecture and extract only low-level handcrafted spatial features from the original data, whose representation and discrimination abilities are usually limited. Zhou et al. [20] employ a four-layer CNN to perform PolSAR image classification on six-channel real-valued data. Following this idea, Zhang et al. [21] propose a complex-valued CNN (CV-CNN), which extends the entire network into the complex domain to fully exploit the amplitude and phase information of complex SAR imagery. Fang et al. [28] perform multiclass classification of a large glacier area based on two layers of sparse learning. These methods take full advantage of the local discrimination of sparse feature representations. The researchers in [31,32] combine SR with LRR to capture the local and global structures of data simultaneously, which noticeably improves classification accuracy.

Problems and Motivation
Contributions and Structure
Multidimensional PolSAR Data
Fully Convolutional Networks
Subspace Learning Based on Graph Embedding
Sparse and Low-Rank Subspace Representations of PolSAR Data
Deep Multi-Scale Spatial Features Learning via FCN-8s
Experimental Data Sets
Experiment Settings
Parameters Tuning
Classification Performance
Findings
Discussion
Conclusions