Abstract

Classification of remote sensing scene images (RSSIs) has been broadly applied and has attracted increasing attention. However, scene classification methods based on convolutional neural networks (CNNs) require a large number of manually labeled samples as training data, which are time-consuming and costly to obtain. Generating labeled data is therefore a practical alternative. However, conventional scene generation based on generative adversarial networks (GANs) involves significant limitations, such as distortion and limited image size. To solve these problems, we propose a method of RSSI generation that combines element geometric transformation with GAN-based texture synthesis. First, we segment the RSSI and extract its element information. Then, we apply geometric transformations to the elements and extract their texture information. Next, we use a GAN-based method to model and generate the texture. Finally, we fuse the transformed elements with the generated texture to obtain the generated RSSI. The geometric transformation increases the complexity of the scene, while the GAN-based texture synthesis ensures that the generated scene image is not distorted. Experimental results demonstrate that the RSSIs generated by our method achieve a better visual effect than those produced by a GAN model alone. In addition, the accuracy of CNN classifiers dropped by 0.44–3.41% on the augmented data set, which is partly attributable to the added complexity of the generated samples. The proposed method can generate diverse scene data with sufficient fidelity from small sample sizes and alleviates the accuracy saturation of public scene data sets.
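For concreteness, the following is a minimal Python sketch of the four-step pipeline described above (segment, transform, synthesize texture, fuse). The Otsu-based segmentation, the rotation/flip transforms, the inpainting step, and the `texture_gan` callable are all illustrative stand-ins chosen for this sketch, not the paper's actual implementation:

```python
import numpy as np
import cv2  # assumed dependency for segmentation and affine transforms


def generate_rssi(scene, texture_gan, angle_deg=90, flip=False):
    """Sketch of the described pipeline. `texture_gan` is a hypothetical
    callable that maps a background image to a GAN-synthesized texture
    of the same size."""
    # 1. Segment the scene into element (foreground) and background.
    #    Otsu thresholding stands in for the paper's segmentation step.
    gray = cv2.cvtColor(scene, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2. Geometric transformation of the extracted elements
    #    (rotation/flip increases scene complexity).
    h, w = mask.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    elements = cv2.bitwise_and(scene, scene, mask=mask)
    elements = cv2.warpAffine(elements, rot, (w, h))
    mask_t = cv2.warpAffine(mask, rot, (w, h))
    if flip:
        elements, mask_t = cv2.flip(elements, 1), cv2.flip(mask_t, 1)

    # 3. GAN-based texture synthesis from the background region
    #    (inpainting removes the elements before texture modeling).
    background = cv2.inpaint(scene, mask, 3, cv2.INPAINT_TELEA)
    texture = texture_gan(background)

    # 4. Fuse the transformed elements with the generated texture.
    fused = np.where(mask_t[..., None] > 0, elements, texture)
    return fused.astype(np.uint8)
```

In this reading, the GAN only has to model stationary background texture rather than whole scenes, which is what avoids the distortion and size limits of end-to-end GAN generation, while element-level transforms supply the geometric diversity.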
