Abstract

Generative adversarial networks (GANs) have effectively advanced hyperspectral image classification by generating additional training samples. However, many GAN-based models for hyperspectral image classification use deconvolution to generate fake samples, which causes chequerboard artefacts and degrades classification performance. Furthermore, the training of GANs still suffers from mode collapse. To address these problems, we propose a dual hybrid convolutional generative adversarial network (DHCGAN) for hyperspectral image classification. Firstly, the combination of nearest neighbour upsampling and sub-pixel convolution is employed in the generator, which avoids overlapping convolution regions and effectively suppresses the chequerboard artefacts caused by deconvolution. Secondly, traditional convolution and dilated convolution are fused in the discriminator, which expands the receptive field without increasing the number of parameters and achieves more effective feature extraction. In addition, adaptive drop blocks are embedded into both the generator and the discriminator to effectively alleviate mode collapse. Experiments were performed on four hyperspectral datasets: three classical datasets – Indian Pines, University of Pavia and Houston – and a new dataset, WHU-Hi-HanChuan. Experimental results show that the proposed method outperforms several competing methods, improving accuracy by more than 1% on the three classical datasets and by more than 3% on the WHU-Hi-HanChuan dataset.
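
To make the two architectural ideas in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: a generator upsampling block that replaces deconvolution with nearest-neighbour upsampling followed by sub-pixel convolution (PixelShuffle), and a discriminator block that fuses a standard 3×3 convolution with a dilated 3×3 convolution. All layer sizes, channel counts and the element-wise fusion are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn


class UpsampleBlock(nn.Module):
    """Nearest-neighbour upsampling + sub-pixel convolution.

    Avoids the overlapping kernel footprints of deconvolution that
    produce chequerboard artefacts.
    """
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="nearest")
        # The convolution outputs out_ch * scale^2 channels, which
        # PixelShuffle rearranges into spatial resolution (sub-pixel conv).
        self.conv = nn.Conv2d(in_ch, out_ch * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        return self.act(self.shuffle(self.conv(self.up(x))))


class HybridConvBlock(nn.Module):
    """Standard 3x3 convolution fused with a dilated 3x3 convolution.

    The dilated branch (dilation=2) covers a 5x5 receptive field while
    keeping the same number of weights as a standard 3x3 kernel.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.dilated = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        # Element-wise fusion of the two branches; channel concatenation
        # would be an alternative fusion strategy.
        return self.act(self.standard(x) + self.dilated(x))


if __name__ == "__main__":
    z = torch.randn(4, 64, 7, 7)           # hypothetical generator feature map
    up = UpsampleBlock(64, 32)(z)           # -> (4, 32, 28, 28): 2x upsample then 2x pixel shuffle
    out = HybridConvBlock(32, 64)(up)       # same spatial size, enlarged receptive field
    print(up.shape, out.shape)
```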
