Abstract
Generative Adversarial Networks (GANs) are widely used for data augmentation, which facilitates the development of more accurate detection models for rare or imbalanced datasets. Computer-assisted diagnostic methods can be made more reliable by using synthetic images generated by GANs. GANs are challenging to train because unstable training dynamics, such as mode collapse and vanishing gradients, can occur throughout the learning process. To obtain accurate results faster, the GAN must be trained in a parallel and distributed manner. We improve the speed and accuracy of the Deep Convolutional Generative Adversarial Network (DCGAN) architecture by exploiting its parallelism and executing it on High-Performance Computing platforms. We analyze the DCGAN on Graphics Processing Unit (GPU) and Tensor Processing Unit (TPU) platforms, examining the execution pattern of each layer, and identify the bottleneck of the GAN structure on each execution platform. The Central Processing Unit (CPU) is capable of processing neural network models, but it requires a great deal of time to do so. GPUs, in contrast, are up to a hundred times faster than CPUs for neural networks, but they are far more expensive. Using its systolic array structure, the TPU performs well on neural networks with large batch sizes; in GAN training, however, the frequent switching between the CPU and the TPU incurs large overhead, so it does not perform well.
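To make the architecture under discussion concrete, the sketch below is a minimal DCGAN generator and discriminator in PyTorch. It is an illustrative assumption, not the authors' published code: the layer sizes follow the standard 64x64 DCGAN configuration, and the hyperparameters `nz`, `ngf`, `ndf`, and `nc` are hypothetical defaults.

```python
# Minimal DCGAN sketch (illustrative; not the paper's exact configuration).
import torch
import torch.nn as nn

nz, ngf, ndf, nc = 100, 64, 64, 3  # latent dim, feature maps, image channels

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector to a 4x4 feature map, then upsample
            # through strided transposed convolutions to 64x64.
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),  # 64x64 RGB image in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Mirror of the generator: strided convolutions downsample
            # the image to a single real/fake score.
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),  # probability that the input image is real
        )

    def forward(self, x):
        return self.net(x).view(-1)

# Place the models on an accelerator when available -- the CPU/GPU
# contrast discussed in the abstract.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
G, D = Generator().to(device), Discriminator().to(device)
fake = G(torch.randn(16, nz, 1, 1, device=device))  # batch of 16 samples
print(fake.shape)  # torch.Size([16, 3, 64, 64])
```

Per-layer execution of these transposed-convolution and convolution stages is what a layer-wise profiling of the kind described in the abstract would measure on each platform.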