Abstract

Feature-learning-based polarimetric synthetic aperture radar (PolSAR) classification models generally suffer from a shortage of labeled pixels. In this paper, we propose a novel generative-discriminative network for PolSAR polarimetric-spatial feature fusion learning and classification, which comprises a deep generative network and a discriminative network that share their bottom layers. This architecture makes it possible to exploit both labeled and unlabeled pixels in a PolSAR image for model learning in a semisupervised way. Moreover, the proposed network imposes a Gaussian random field prior on the learned fusion features and a conditional random field posterior on the output label configuration. Without the need for complicated recurrent iterations, our network can still efficiently produce structured fusion features as well as a smoothed classification map by introducing some auxiliary variables, and it is optimized via variational inference within an alternating direction method of multipliers (ADMM) iteration scheme. Extensive experiments on different benchmark PolSAR images demonstrate the effectiveness and superiority of the proposed network. Compared with other state-of-the-art algorithms for PolSAR feature learning and classification, our model achieves much better performance in terms of both the visual quality of the label map and the overall classification accuracy, while requiring far fewer labeled pixels.
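
To make the shared-bottom idea concrete, the sketch below shows, in plain PyTorch, how a generative (reconstruction) branch and a discriminative (classification) branch can share their bottom layers and be trained semisupervisedly, with a reconstruction loss over all pixels and a cross-entropy loss over the labeled subset only. This is a minimal illustration under assumed settings; the layer sizes, loss weighting, and names (`SharedBottomNet`, `semisupervised_loss`) are hypothetical, and the sketch deliberately omits the paper's Gaussian random field prior, conditional random field posterior, and ADMM-based variational inference.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBottomNet(nn.Module):
    """Illustrative generative-discriminative net with shared bottom layers."""

    def __init__(self, in_dim=9, hidden_dim=64, fusion_dim=32, n_classes=5):
        super().__init__()
        # Shared bottom layers: map per-pixel polarimetric features to a fusion feature.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, fusion_dim), nn.ReLU(),
        )
        # Generative branch: reconstructs the input from the fusion feature.
        self.decoder = nn.Sequential(
            nn.Linear(fusion_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim),
        )
        # Discriminative branch: predicts class logits from the fusion feature.
        self.classifier = nn.Linear(fusion_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)                      # shared fusion feature
        return self.decoder(z), self.classifier(z)

def semisupervised_loss(model, x_all, x_lab, y_lab, recon_weight=1.0):
    """Reconstruction on all pixels + cross-entropy on labeled pixels only."""
    recon_all, _ = model(x_all)
    _, logits_lab = model(x_lab)
    loss_recon = F.mse_loss(recon_all, x_all)
    loss_cls = F.cross_entropy(logits_lab, y_lab)
    return loss_cls + recon_weight * loss_recon

# Toy usage: 9-D polarimetric feature vectors per pixel, 5 classes (hypothetical data).
model = SharedBottomNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_all = torch.randn(256, 9)                      # labeled + unlabeled pixels
x_lab, y_lab = x_all[:32], torch.randint(0, 5, (32,))
loss = semisupervised_loss(model, x_all, x_lab, y_lab)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the encoder receives gradients from both branches, unlabeled pixels shape the fusion feature through the reconstruction term even though they contribute nothing to the classification term.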
