Abstract

Implicit neural networks have demonstrated immense potential in compressing volume data for visualization. However, despite their advantages, the high costs of training and inference have thus far limited their application to offline data processing and non-interactive rendering. In this paper, we present a novel solution that leverages modern GPU tensor cores, a well-implemented CUDA machine learning framework, an optimized global-illumination-capable volume rendering algorithm, and a suitable acceleration data structure to enable real-time direct ray tracing of volumetric neural representations. Our approach produces high-fidelity neural representations with a peak signal-to-noise ratio (PSNR) exceeding 30 dB, while reducing their size by up to three orders of magnitude. Remarkably, we show that the entire training step can fit within a rendering loop, bypassing the need for pre-training. Additionally, we introduce an efficient out-of-core training strategy to support extreme-scale volume data, allowing volumetric neural representation training to scale to terascale datasets on a workstation equipped with an NVIDIA RTX 3090 GPU. Our method significantly outperforms state-of-the-art techniques in terms of training time, reconstruction quality, and rendering performance, making it an ideal choice for applications where fast and accurate visualization of large-scale volume data is paramount.
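To make the training-in-the-rendering-loop idea concrete, the sketch below interleaves one gradient step on a small coordinate MLP with each rendered frame and reports the PSNR of the reconstruction. It is only a minimal illustration, not the paper's tensor-core/CUDA implementation: the toy volume, network size, batch size, and the omitted `render_frame` call are all illustrative assumptions.

```python
import numpy as np
import torch

# Illustrative assumption: a dense scalar volume already loaded as a 3-D array in [0, 1].
volume = torch.rand(64, 64, 64)  # placeholder for real data

# Small coordinate-based MLP mapping (x, y, z) in [0, 1]^3 to a scalar value.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def sample_batch(n=4096):
    """Draw random coordinates and their ground-truth values from the volume."""
    coords = torch.rand(n, 3)
    idx = (coords * (torch.tensor(volume.shape) - 1)).long()
    values = volume[idx[:, 0], idx[:, 1], idx[:, 2]].unsqueeze(1)
    return coords, values

for frame in range(100):          # the rendering loop
    # One training step per frame: online training, no pre-training phase.
    coords, target = sample_batch()
    pred = model(coords)
    loss = torch.nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Render the current network state (ray marching omitted in this sketch).
    # render_frame(model)

    # PSNR of the reconstruction relative to the raw volume (values in [0, 1]).
    if frame % 20 == 0:
        mse = loss.item()
        psnr = 10.0 * np.log10(1.0 / mse) if mse > 0 else float("inf")
        print(f"frame {frame}: loss={mse:.5f}, PSNR={psnr:.2f} dB")
```

A production version would replace the toy MLP with a hash-grid-encoded network on the GPU and stream out-of-core sample batches for terascale volumes, but the control flow, with training and rendering sharing the same per-frame loop, is the same.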
