Abstract

Implicit neural representation (INR) has surfaced as a promising direction for solving various scientific visualization tasks due to its continuous representation and flexible input and output settings. We present STSR-INR, an INR solution for generating simultaneous spatiotemporal super-resolution for multivariate time-varying volumetric data. Inheriting the benefits of the INR-based approach, STSR-INR supports unsupervised learning and permits data upscaling with arbitrary spatial and temporal scale factors. Unlike existing GAN- or INR-based super-resolution methods, STSR-INR focuses on tackling multiple variables or ensembles and enables joint training across datasets of various spatiotemporal resolutions. We achieve this capability via a variable embedding scheme that learns latent vectors for different variables. In conjunction with a modulated structure in the network design, we employ a variational auto-decoder to optimize the learnable latent vectors and enable latent-space interpolation. To combat the slow training of INR, we leverage a multi-head strategy that significantly improves training and inference speed. We demonstrate the effectiveness of STSR-INR on multiple scalar field datasets and compare it with conventional tricubic+linear interpolation and state-of-the-art deep-learning-based solutions (STNet and CoordNet).
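To make the core ideas concrete, the sketch below illustrates (in NumPy, with untrained toy weights) how a coordinate-based INR with per-variable latent vectors can work: each variable gets a learnable latent code that modulates a shared MLP, and arbitrary-scale super-resolution amounts to querying the network on a denser continuous coordinate grid. This is only a hypothetical illustration of the general technique, not STSR-INR's actual architecture; names such as `stsr_inr_query`, the layer sizes, and the FiLM-style modulation are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_LAT, D_HID = 4, 8, 16  # (x, y, z, t) coords; latent size; hidden width

# Toy random weights; a real model trains these end-to-end on volume samples.
W1 = rng.normal(size=(D_IN, D_HID));  b1 = np.zeros(D_HID)
W2 = rng.normal(size=(D_HID, D_HID)); b2 = np.zeros(D_HID)
W3 = rng.normal(size=(D_HID, 1));     b3 = np.zeros(1)
Wm = rng.normal(size=(D_LAT, D_HID))  # maps a latent vector to a modulation

# One learnable latent vector per variable (the "variable embedding" idea).
latents = {"pressure":    rng.normal(size=D_LAT),
           "temperature": rng.normal(size=D_LAT)}

def stsr_inr_query(coords, variable):
    """Evaluate the shared INR at continuous (x, y, z, t) coords for one variable.

    coords: (N, 4) array, each component normalized to [-1, 1].
    The variable's latent code scales the hidden activations (FiLM-style
    modulation), so a single network can represent many variables/ensembles.
    """
    z = latents[variable]
    mod = np.tanh(z @ Wm)             # modulation vector derived from the latent
    h = np.sin(coords @ W1 + b1)      # sinusoidal activation (SIREN-style)
    h = np.sin((h * mod) @ W2 + b2)   # latent-modulated hidden layer
    return (h @ W3 + b3).squeeze(-1)  # scalar field value at each coordinate

# Arbitrary-scale spatiotemporal upsampling: just query a denser grid.
axis = np.linspace(-1.0, 1.0, 5)
grid = np.stack(np.meshgrid(axis, axis, axis, [0.0], indexing="ij"),
                axis=-1).reshape(-1, 4)
vals = stsr_inr_query(grid, "pressure")  # one sample per grid point
```

Because the network is continuous in both space and time, the same call with a finer `linspace` (or intermediate `t` values) yields higher spatial or temporal resolution without retraining, and interpolating between two latent vectors sketches the latent-space interpolation the variational auto-decoder is meant to enable.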
