Abstract

Although recent deep generative models can generate high-resolution, diverse natural samples from complex datasets, the generated samples still exhibit problems in image structure and detailed texture. In this paper, we propose a novel network architecture, SEDA-GAN, that learns latent relationships along the channel dimension to enhance the generative performance of GANs. The proposed architecture applies a Squeeze-and-Excitation (SE) block for feature recalibration to model channel interdependencies within GAN features, and it also incorporates a dual-attention (DA) module with a channel attention mechanism into the GAN framework to capture global dependencies between channels. In comparative experiments on the CIFAR and ImageNet datasets using BigGAN as a baseline, our model shows consistent improvements when evaluated on Fréchet Inception Distance (FID) and Inception Score (IS).
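The channel recalibration described above follows the standard SE pattern: squeeze each channel to a scalar by global average pooling, pass the resulting vector through a bottleneck excitation network, and rescale each channel by the learned gate. The following is a minimal NumPy sketch of that pattern, not the authors' implementation; the function name `se_block`, the weight matrices, and the reduction ratio are illustrative assumptions.

```python
import numpy as np

def se_block(x, w_reduce, w_expand):
    """Squeeze-and-Excitation channel recalibration (illustrative sketch).

    x         : feature map of shape (C, H, W)
    w_reduce  : bottleneck weights of shape (C // r, C), r = reduction ratio
    w_expand  : expansion weights of shape (C, C // r)
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate -> (C,)
    s = np.maximum(0.0, w_reduce @ z)
    gate = 1.0 / (1.0 + np.exp(-(w_expand @ s)))
    # Recalibration: scale each channel by its gate in (0, 1)
    return x * gate[:, None, None]

# Toy usage with random weights (in a GAN these would be learned)
rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = se_block(x, w1, w2)
```

Because the gate lies in (0, 1), recalibration can only attenuate channels, which is how the block emphasizes informative channels relative to the rest.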
