Abstract
Generative adversarial networks (GANs) have seen tremendous growth owing to their potential and efficacy in producing realistic samples. This study proposes a lightweight GAN (LiWGAN) to learn non-image synthesis with minimal computational time for low-power computing. To this end, the LiWGAN method enhances a new skip-layer channel-wise excitation (SLE) module and a self-supervised discriminator design, evaluated on a facemask dataset. The facemask is one of the preventive strategies popularized by the current COVID-19 pandemic. LiWGAN performs non-image synthesis of facemasks, which could help researchers identify individuals on low-power devices, address occlusion challenges in face recognition, and alleviate accuracy problems caused by limited datasets. Performance was assessed by comparing processing times on the facemask dataset across batch sizes and image resolutions. The Fréchet inception distance (FID) was also measured on the facemask images to evaluate the quality of the images augmented by LiWGAN. For 3,000 generated images, LiWGAN achieved an FID of 220.43, close to StyleGAN's 219.97, with significantly less processing time per iteration (1.03 s). A further experiment on the CelebA dataset compared LiWGAN with GL-GAN and DRAGAN, demonstrating that LiWGAN is also suitable for other datasets: it outperformed both with an FID of 91.31 at 3.50 s of processing time per iteration. Future work could therefore aim to drive the FID score closer to zero with even less processing time on different datasets.
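For context on the SLE module the abstract names, below is a minimal PyTorch sketch of a skip-layer channel-wise excitation block in the FastGAN style, where a low-resolution feature map gates the channels of a high-resolution one. The layer choices and channel sizes are illustrative assumptions, not the authors' exact LiWGAN implementation.

```python
import torch
import torch.nn as nn

class SLEBlock(nn.Module):
    """Skip-layer channel-wise excitation (FastGAN-style sketch): a
    low-resolution feature map produces per-channel gates for a
    high-resolution one. Channel sizes and layers here are assumptions."""

    def __init__(self, low_ch: int, high_ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),        # squeeze low-res map to 4x4
            nn.Conv2d(low_ch, high_ch, 4),  # 4x4 conv -> 1x1 spatial size
            nn.SiLU(),                      # smooth non-linearity
            nn.Conv2d(high_ch, high_ch, 1), # channel mixing
            nn.Sigmoid(),                   # per-channel gate in (0, 1)
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Gate has shape (N, high_ch, 1, 1) and broadcasts over the
        # spatial dimensions of the high-resolution feature map.
        return high * self.gate(low)

# Example: an 8x8 feature map gates a 128x128 one.
low = torch.randn(2, 256, 8, 8)
high = torch.randn(2, 64, 128, 128)
print(SLEBlock(256, 64)(low, high).shape)  # torch.Size([2, 64, 128, 128])
```

Likewise, the FID scores reported above compare the mean and covariance of Inception-v3 features between real and generated images. The sketch below shows the standard computation, assuming the feature matrices have already been extracted; the function and variable names are hypothetical.

```python
import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Fréchet inception distance between two feature matrices
    (rows are images, columns are Inception feature dimensions)."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    # FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from sqrtm
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Lower values indicate that the generated distribution more closely matches the real one, which is why the paper treats an FID approaching zero as the long-term goal.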