Abstract

In recent years, artificial intelligence, especially deep learning, has advanced remarkably, and its application to various fields has grown rapidly. In this paper, I report the results of applying generative adversarial networks (GANs), specifically video-to-video translation networks, to computational fluid dynamics (CFD) simulations. The purpose of this research is to reduce the computational cost of CFD simulations with GANs. The GAN architecture in this research combines an image-to-image translation network (the so-called "pix2pix") with Long Short-Term Memory (LSTM). It is shown that the results of high-cost, high-accuracy simulations (with high-resolution computational grids) can be estimated from those of low-cost, low-accuracy simulations (with low-resolution grids). In particular, the time evolution of the density distribution on a high-resolution grid is reproduced by the GAN from that on a low-resolution grid, and the density inhomogeneity estimated from the GAN-generated images recovers the ground truth with good accuracy. Qualitative and quantitative comparisons of the proposed method with several super-resolution algorithms are also presented.
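The abstract describes the architecture only at a high level. As a minimal sketch of the idea, assuming PyTorch, the code below pairs a convolutional encoder-decoder (in the spirit of pix2pix) with a convolutional LSTM cell that carries state across frames, mapping a low-resolution sequence to a higher-resolution one. The layer widths, the 4x upsampling factor, and the single-channel (density) fields are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact model): a pix2pix-style
# encoder-decoder with a ConvLSTM bottleneck for video-to-video
# translation from low- to high-resolution frames. Assumes PyTorch;
# channel widths and the 4x upsampling are illustrative choices.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell: all four gates from one convolution."""
    def __init__(self, in_ch: int, hid_ch: int):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class VideoGenerator(nn.Module):
    """Encode each low-res frame, propagate ConvLSTM state through
    time, and decode each hidden state to a 4x-resolution frame."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.ch = ch
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.lstm = ConvLSTMCell(ch, ch)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, frames):               # frames: (B, T, 1, H, W)
        B, T, _, H, W = frames.shape
        h = frames.new_zeros(B, self.ch, H, W)
        c = frames.new_zeros(B, self.ch, H, W)
        outs = []
        for t in range(T):
            h, c = self.lstm(self.enc(frames[:, t]), (h, c))
            outs.append(self.dec(h))
        return torch.stack(outs, dim=1)       # (B, T, 1, 4H, 4W)

low_res = torch.randn(2, 8, 1, 32, 32)        # two 8-frame sequences
print(VideoGenerator()(low_res).shape)        # torch.Size([2, 8, 1, 128, 128])
```

In the GAN setting, such a generator would be trained against a discriminator that judges whether a (low-resolution input, high-resolution output) pair is real or generated, as in pix2pix.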

Highlights

  • Artificial intelligence is advancing rapidly and has come to match or outperform humans in several tasks

  • I first show the results of time-series image-to-image translation on the training datasets and explain how to evaluate the quality of the synthesized images quantitatively (a generic evaluation metric is sketched after this list)

  • Testing datasets that were not used for training are input to the trained model, and the synthesized images are output by the generator
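The paper's own quantitative measure is the density inhomogeneity recovered from the generated images; its exact definition is not reproduced on this page, so the sketch below instead uses peak signal-to-noise ratio (PSNR), a common generic image-quality metric, purely to illustrate frame-wise quantitative evaluation. The arrays and values are hypothetical stand-ins.

```python
# Illustrative only: PSNR between a generated frame and ground truth.
# The paper evaluates a physical quantity (density inhomogeneity);
# PSNR is shown here as a common generic image-quality metric.
import numpy as np

def psnr(generated: np.ndarray, truth: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to truth."""
    mse = np.mean((generated - truth) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

fake = np.clip(np.random.rand(128, 128), 0.0, 1.0)              # stand-in generated frame
real = np.clip(fake + 0.01 * np.random.randn(128, 128), 0.0, 1.0)  # stand-in ground truth
print(f"PSNR: {psnr(fake, real):.1f} dB")
```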


Summary

Introduction

Artificial intelligence is advancing rapidly and has come to match or outperform humans in several tasks. An agent trained by reinforcement learning can reach a level comparable to that of professional human game testers (Mnih et al., 2015). In a generative adversarial network (GAN; Goodfellow et al., 2014), two models, a generator and a discriminator, are trained in competition with each other. Radford et al. (2016) applied deep convolutional neural networks to those two models, an architecture called deep convolutional GANs (DCGAN). Isola et al. (2017) proposed a network that learns the mapping from an input image to an output image, enabling translation between the two image domains. This network, the so-called pix2pix, can convert black-and-white images into color images, line drawings into photo-realistic images, and so on.
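For reference, pix2pix (Isola et al., 2017) optimizes a conditional adversarial loss combined with an L1 reconstruction term:

$$\mathcal{L}_{\mathrm{cGAN}}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big] + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big],$$

$$G^{*} = \arg\min_{G}\max_{D}\, \mathcal{L}_{\mathrm{cGAN}}(G, D) + \lambda\, \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z)\rVert_{1}\big],$$

where $x$ is the input image, $y$ the target image, $z$ a noise input, and $\lambda$ weights the reconstruction term.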
