Abstract

We present a kernel‐predicting neural denoising method for path‐traced deep‐Z images that facilitates their usage in animation and visual effects production. Deep‐Z images provide enhanced flexibility during compositing as they contain color, opacity, and other rendered data at multiple depth‐resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep‐Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep‐Z images. Our method extends kernel‐predicting convolutional neural networks to address the challenges stemming from denoising deep‐Z images. We propose a hybrid reconstruction architecture that combines the depth‐resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth‐aware neighbor indexing of the depth‐resolved inputs to the convolution and denoising kernel application operators, which reduces artifacts caused by depth misalignment present in deep‐Z images. We evaluate our method on a production‐quality deep‐Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state‐of‐the‐art deep‐Z denoiser. By addressing the significant cost of rendering path‐traced deep‐Z images, we believe that our approach will pave the way for broader adoption of deep‐Z workflows in future productions.
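To make the hybrid reconstruction idea in the abstract concrete, the following minimal Python sketch combines a flattened depth-resolved result with a pixel-level result. The bin layout (un-premultiplied RGB plus opacity per bin, sorted near to far), the use of front-to-back "over" compositing for flattening, and the scalar blend weight are all illustrative assumptions, not the paper's exact formulation; the function names `flatten_bins` and `hybrid_pixel` are hypothetical.

```python
import numpy as np

def flatten_bins(bin_colors, bin_alphas):
    """Flatten depth-sorted bins into one pixel color via front-to-back
    'over' compositing (assumed convention; colors un-premultiplied)."""
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light still unoccluded
    for c, a in zip(bin_colors, bin_alphas):
        color += transmittance * a * c
        transmittance *= 1.0 - a
    return color

def hybrid_pixel(bin_colors, bin_alphas, pixel_level_color, blend):
    """Mix the flattened depth-resolved reconstruction with a pixel-level
    reconstruction; 'blend' stands in for a learned per-pixel weight."""
    depth_resolved = flatten_bins(bin_colors, bin_alphas)
    return blend * depth_resolved + (1.0 - blend) * pixel_level_color

# Toy usage: one pixel with two depth bins plus a pixel-level denoised color.
bins_rgb = np.array([[0.8, 0.2, 0.1], [0.1, 0.3, 0.9]])
bins_a = np.array([0.6, 1.0])
pixel_rgb = np.array([0.5, 0.25, 0.4])
print(hybrid_pixel(bins_rgb, bins_a, pixel_rgb, blend=0.7))
```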
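Depth-aware neighbor indexing changes which bin a spatial neighbor contributes when a denoising kernel is applied: rather than pairing bins by index across pixels, each neighbor pixel contributes the bin whose depth is closest to the center bin's depth. The sketch below is an unoptimized illustration under assumed data layouts (ragged per-pixel bin arrays indexed as `colors[y][x]`, a predicted k x k kernel per bin); it is not the authors' implementation, and the same indexing idea would also apply to the network's convolutions.

```python
import numpy as np

def denoise_bin(colors, depths, kernel, x, y, b):
    """Apply a predicted k x k spatial kernel to bin b of pixel (x, y),
    gathering each neighbor's contribution from the neighbor bin whose
    depth is nearest to the center bin's depth (depth-aware indexing)."""
    k = kernel.shape[0]
    r = k // 2
    h, w = len(colors), len(colors[0])
    center_depth = depths[y][x][b]
    acc, wsum = np.zeros(3), 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                continue  # ignore neighbors outside the image
            # Depth-aware indexing: pick the neighbor bin closest in depth
            # to the center bin rather than the bin with the same index.
            nb = int(np.argmin(np.abs(depths[ny][nx] - center_depth)))
            wgt = kernel[dy + r, dx + r]
            acc += wgt * colors[ny][nx][nb]
            wsum += wgt
    return acc / max(wsum, 1e-8)  # renormalize over in-bounds weights

# Toy 1x2 image with a different bin count per pixel (ragged lists).
colors = [[np.array([[1.0, 0.0, 0.0]]),
           np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])]]
depths = [[np.array([2.0]), np.array([0.5, 2.1])]]
kernel = np.full((3, 3), 1.0 / 9.0)  # uniform kernel, stand-in for a prediction
print(denoise_bin(colors, depths, kernel, x=0, y=0, b=0))
```

In this toy run, the right-hand pixel contributes its second bin (depth 2.1) because it lies nearest to the center bin's depth of 2.0, which is the misalignment-avoiding behavior the abstract attributes to depth-aware indexing.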
