Abstract

Synthetic aperture radar (SAR) image classification is an active topic in SAR image interpretation. However, the lack of effective feature representations and the presence of speckle noise in SAR images make classification difficult. To overcome these problems, a deep convolutional autoencoder (DCAE) is proposed to extract features and perform classification automatically. The deep network is composed of eight layers: a convolutional layer to extract texture features, a scale transformation layer to aggregate neighborhood information, four layers based on sparse autoencoders to optimize the features and perform classification, and two final layers for postprocessing. Compared with hand-crafted features, the DCAE network provides an automatic way to learn discriminative features from the image. A series of filters is designed as convolutional units to combine gray-level co-occurrence matrix (GLCM) and Gabor features. The scale transformation integrates correlated neighboring pixels to reduce the influence of speckle noise. The sparse autoencoders seek a feature representation better matched to the classifier, and training labels are used to fine-tune the network parameters. Morphological smoothing removes isolated points from the classification map. The whole network is carefully designed, and each part contributes to the classification accuracy. Experiments on a TerraSAR-X image demonstrate that the DCAE network extracts effective features and achieves better classification results than several related algorithms.
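For readers who want a concrete picture of the pipeline the abstract describes, the sketch below strings together simplified stand-ins for each stage: a Gabor filter bank as the convolutional layer, neighborhood averaging as the scale transformation, a single sparse autoencoder layer, and a median-vote filter in place of morphological smoothing. All function names, filter sizes, and hyperparameters here are illustrative assumptions, not the paper's implementation; the GLCM features and the supervised fine-tuning step are omitted for brevity.

    # Minimal, illustrative sketch of a DCAE-style pipeline (not the authors' code).
    import numpy as np
    from scipy import ndimage

    def gabor_kernel(theta, sigma=2.0, lam=4.0, size=9):
        """Real part of a Gabor filter oriented at `theta` (texture feature unit)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

    def convolutional_layer(img, n_orientations=4):
        """Convolve the SAR image with a small Gabor bank to obtain texture maps."""
        thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
        return np.stack([ndimage.convolve(img, gabor_kernel(t)) for t in thetas], axis=-1)

    def scale_transformation(features, size=5):
        """Aggregate correlated neighboring pixels to suppress speckle noise."""
        return np.stack([ndimage.uniform_filter(features[..., c], size=size)
                         for c in range(features.shape[-1])], axis=-1)

    def sparse_autoencoder(X, n_hidden=16, epochs=200, lr=0.1, rho=0.05, beta=0.1):
        """Train one sparse autoencoder layer (sigmoid units, KL sparsity penalty)."""
        rng = np.random.default_rng(0)
        n_in = X.shape[1]
        W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))
        for _ in range(epochs):
            H = sig(X @ W1 + b1)                  # hidden code
            Xr = sig(H @ W2 + b2)                 # reconstruction
            rho_hat = H.mean(axis=0)              # average hidden activation
            d_out = (Xr - X) * Xr * (1 - Xr)      # squared-error output delta
            sparse_grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
            d_hid = (d_out @ W2.T + sparse_grad) * H * (1 - H)
            W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
            W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)
        return lambda Z: sig(Z @ W1 + b1)         # encoder for the next layer

    def smooth_labels(label_map, size=3):
        """Postprocessing stand-in: remove isolated points by a local median vote."""
        return ndimage.median_filter(label_map, size=size)

In this sketch, each pixel's feature vector (the stacked, noise-smoothed texture responses) would be fed through one or more trained encoders before a classifier assigns a label, and the resulting label map would then be smoothed; the paper instead stacks four autoencoder-based layers and fine-tunes them with the training labels.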
