Substantial training data is necessary to train an effective generative adversarial network (GAN); without it, the discriminator easily overfits, leading to sub-optimal models. To address this problem, this work explores Frequency-domain Negative sample mining in Contrastive learning (FNContra) to improve data efficiency, which requires the discriminator to distinguish the relationships between negative samples and real images. Concretely, this work first constructs multi-level negative samples in the frequency domain and then proposes Discriminated Wavelet-instance Contrastive Learning (DWCL) and Generated Wavelet-prototype Contrastive Learning (GWCL). The former helps the discriminator learn fine-grained texture features, and the latter pushes the generated feature distribution toward the real one. Considering the varying learning difficulty of the multi-level negative samples, this work proposes a dynamic weight driven by self-information, which ensures that the resultant force from the multi-level negative samples remains positive during training. Finally, this work performs experiments on eleven datasets spanning different domains and resolutions. The quantitative and qualitative results demonstrate the superiority and effectiveness of FNContra trained on limited data, indicating that FNContra can synthesize high-quality images. Notably, FNContra achieves the best FID scores on 10 out of 11 datasets, with improvements of 17.90 and 29.24 on Moongate and Shells, respectively, compared to the baseline. The code can be found at https://github.com/YQX1996/FNContra.
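To make the core idea of the abstract concrete, the following is a minimal sketch (not the authors' released code) of how multi-level frequency-domain negative samples might be built with a wavelet decomposition, together with a generic InfoNCE-style instance contrastive term. The function names (`make_wavelet_negatives`, `info_nce`), the choice of Haar wavelet, and the strategy of zeroing detail subbands are illustrative assumptions; FNContra's exact construction, losses, and self-information weighting are defined in the paper and repository.

```python
import numpy as np
import pywt
import torch
import torch.nn.functional as F


def make_wavelet_negatives(image, wavelet="haar", max_level=3):
    """Build progressively degraded negatives from one real image.

    Level-k negative (assumed scheme): zero the detail (high-frequency)
    subbands of the k finest decomposition levels and reconstruct, so
    larger k removes more texture from the real image.
    image: float array of shape (H, W) with values in [0, 1].
    """
    coeffs = pywt.wavedec2(image, wavelet, level=max_level)
    negatives = []
    for k in range(1, max_level + 1):
        damaged = [coeffs[0]]  # keep the low-frequency approximation
        for i, detail in enumerate(coeffs[1:], start=1):
            if i > max_level - k:  # the k finest detail levels
                damaged.append(tuple(np.zeros_like(b) for b in detail))
            else:
                damaged.append(detail)
        negatives.append(pywt.waverec2(damaged, wavelet))
    return negatives  # ordered from mildly to heavily degraded


def info_nce(real_feat, pos_feat, neg_feats, tau=0.1):
    """Generic instance-level contrastive term: pull a positive pair
    together and push the wavelet negatives away in feature space.
    real_feat, pos_feat: (B, D); neg_feats: (B, M, D) for M negative levels.
    """
    real_feat = F.normalize(real_feat, dim=-1)
    pos_feat = F.normalize(pos_feat, dim=-1)
    neg_feats = F.normalize(neg_feats, dim=-1)
    l_pos = (real_feat * pos_feat).sum(-1, keepdim=True) / tau      # (B, 1)
    l_neg = torch.einsum("bd,bmd->bm", real_feat, neg_feats) / tau  # (B, M)
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

In this sketch, the discriminator's features of the real image and its negatives would be fed to `info_nce`; the paper's dynamic self-information weighting, which balances negatives of different difficulty, is not reproduced here.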