Abstract

We propose a novel method for efficiently generating a highly refined normal map for screen-space fluid rendering. Because filtering the normal map is crucial to the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal-map representation and thereby refines the low-quality input normal map. In particular, we design a novel loss function dedicated to refining normal-map information, and we use a specific set of auxiliary features to train the cGAN generator to learn features that are more robust to edge details. Additionally, we construct a dataset of six typical scenes to demonstrate multitype fluid simulation effectively. Experiments indicate that our generator infers clearer and more detailed features on this dataset than a basic screen-space fluid rendering method; in some cases, the results are even smoother than those produced by a conventional surface reconstruction method. Our method improves the fluid rendering results via the high-quality normal map while preserving the advantages of both screen-space fluid rendering and traditional surface reconstruction: the computation time is independent of the number of simulation particles, and the spatial resolution depends only on the image resolution.

Highlights

  • Particle-based methods are often used for fluid simulation, and many rendering methods have been developed for drawing high-quality particle surfaces

  • We propose a conditional generative adversarial network (cGAN)-based filter that effectively improves the results of screen-space fluid rendering

  • Our method uses deep learning to refine the normal map for screen-space fluid rendering


Introduction

Particle-based methods are often used for fluid simulation, and many rendering methods have been developed for drawing high-quality particle surfaces. Screen-space-based methods suffer from certain problems, such as the surface appearing convex and cases where the front and back of a particle are not distinguishable. In screen-space-based methods, the normal map is crucially important because it determines the shape and color of the rendered result. In considering screen-space rendering methods, we were inspired by recent work in image generation using deep convolutional neural networks [3,4], whose state-of-the-art architectures continue to push toward higher performance. We propose a deep learning method, based on conditional generative adversarial networks (cGANs) [5], that generates a highly refined normal map for fluid rendering. A cGAN applies the GAN framework in a conditional setting: rather than approximating the true data probability distribution, it infers the conditional probability distribution of the output given the input.
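The conditional GAN objective mentioned above can be sketched numerically. The following is a minimal numpy illustration, not the paper's actual loss (which is not reproduced here): `d_loss` and `g_loss` are the standard non-saturating adversarial terms, and `g_total_loss` adds an L1 reconstruction term on the normal maps, a common pix2pix-style combination; all function names and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator maximizes log D(x, y) + log(1 - D(x, G(x)));
    # negated here so it reads as a loss to minimize.
    # d_real, d_fake: discriminator sigmoid outputs in (0, 1).
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def g_loss(d_fake):
    # Non-saturating generator loss: minimize -log D(x, G(x)).
    return -np.log(d_fake).mean()

def g_total_loss(d_fake, fake_normals, real_normals, lam=100.0):
    # Adversarial term plus an L1 term pulling the generated
    # normal map toward the reference normal map (weight lam is
    # a conventional pix2pix-style choice, not from the paper).
    return g_loss(d_fake) + lam * np.abs(fake_normals - real_normals).mean()
```

At the equilibrium of this two-player game, the generator's output distribution conditioned on the input (here, the low-quality normal map and auxiliary features) matches the conditional distribution of the training targets, which is why a cGAN can act as a learned filter.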

