Abstract

Upscaling of time-varying volume data is significant, since it can be used in in situ visualization to help scientists quickly analyse complex simulations involving time-varying volume data. A novel deep learning method called Pix2Pix spatial super-resolution (Pix2PixSSR), which can generate spatial super-resolution of time-varying volume data, is proposed here. It consists of two main components: one is a variant UNet-like generator that takes the low-resolution volume sequence as input and generates the high-resolution counterparts; the other is a PatchGAN discriminator that takes both the low- and high-resolution volume sequences as input and predicts their realness. To validate its advantages, we qualitatively and quantitatively compare it with state-of-the-art upscaling techniques. More specifically, two experiments are performed. The first experiment uses the same variable of a time-varying volume dataset for training and inference, while the second experiment uses different variables for training and inference. The experimental results show that, in most cases, Pix2PixSSR generates super-resolution volumes that are the most similar to the ground truth, compared to the state-of-the-art techniques.
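The abstract describes a Pix2Pix-style conditional GAN operating on volumes: a UNet-like generator that maps a low-resolution volume to a high-resolution one, and a PatchGAN discriminator conditioned on the low-resolution input. The PyTorch sketch below illustrates what such a pairing could look like; it is not the authors' implementation, and the class names (GeneratorSR, PatchDiscriminator), layer counts, channel widths, and the 2x upscaling factor are all assumptions for brevity.

```python
# Illustrative sketch only -- not the paper's architecture. Depths, widths,
# and the 2x scale factor are assumed; the paper's settings may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeneratorSR(nn.Module):
    """UNet-like 3D generator: low-resolution volume -> 2x super-resolved volume."""
    def __init__(self, ch=1, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(ch, base, 3, padding=1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv3d(base, base * 2, 3, stride=2, padding=1), nn.LeakyReLU(0.2))
        self.dec1 = nn.Sequential(nn.ConvTranspose3d(base * 2, base, 4, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose3d(base * 2, base, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv3d(base, ch, 3, padding=1)

    def forward(self, lo):
        e1 = self.enc1(lo)              # encoder feature kept for the skip connection
        e2 = self.enc2(e1)              # downsampled bottleneck features
        d1 = self.dec1(e2)              # back to the input resolution
        x = torch.cat([d1, e1], dim=1)  # UNet-style skip concatenation
        x = self.up(x)                  # upsample to the 2x target resolution
        return self.out(x)


class PatchDiscriminator(nn.Module):
    """3D PatchGAN: scores realness of (low-res, high-res) volume pairs patch-wise."""
    def __init__(self, ch=1, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(ch * 2, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(base * 2, 1, 3, padding=1),  # per-patch realness map
        )

    def forward(self, lo, hi):
        # Condition on the low-resolution volume by upsampling it to the
        # high-resolution grid and concatenating it channel-wise.
        lo_up = F.interpolate(lo, size=hi.shape[2:], mode="trilinear", align_corners=False)
        return self.net(torch.cat([lo_up, hi], dim=1))


if __name__ == "__main__":
    lo = torch.randn(1, 1, 16, 16, 16)  # low-resolution volume (one timestep)
    hi = torch.randn(1, 1, 32, 32, 32)  # ground-truth high-resolution volume
    G, D = GeneratorSR(), PatchDiscriminator()
    fake_hi = G(lo)
    print(fake_hi.shape)                # torch.Size([1, 1, 32, 32, 32])
    print(D(lo, fake_hi).shape)         # patch-wise realness map
```

In a Pix2Pix-style setup the generator would typically be trained with a combination of an adversarial loss from the patch-wise discriminator output and a voxel-wise reconstruction loss against the ground-truth high-resolution volume.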
