Photorealistic rendering is essential for immersive virtual reality, and real-time ray tracing is required to achieve it, yet its computational load has so far hindered its use in scenarios where fast response and high frame rates are paramount. To overcome this challenge, we present a deep learning-based model that reduces the computational load of ray tracing by using convolutional neural networks (CNNs) to predict light-scene interactions. We train the CNNs on datasets of offline ray-traced images; through this learning process, the networks approximate the output of light transport simulations from sparse scene-point samples. Applied to virtual reality scenes, this optimised algorithm enables rendering of more complex scenes, with dynamic lighting, complex geometry and interactive elements, at high frame rates, keeping users immersed and responsive to events. Our results show that the method achieves visual quality comparable to traditional photorealistic ray tracing while running in real time in virtual reality. We anticipate that this work will enable ray tracing in many virtual reality applications, such as gaming, architectural rendering and training.
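The core idea, supervising a network on offline ray-traced images so that it learns to map cheap per-pixel scene samples to full light-transport results, can be illustrated with a minimal sketch. Everything below is a toy stand-in, not the paper's architecture: a single 3x3 convolution is fit by gradient descent to reproduce synthetic "ray-traced" targets generated by a fixed, unknown-to-the-learner kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a small kernel."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical stand-in data: per-pixel scene samples (e.g. cosine terms read
# from a G-buffer) and "offline ray-traced" targets produced by a fixed
# light-transport kernel the learner never sees directly.
true_kernel = np.array([[0., 1., 0.],
                        [1., 4., 1.],
                        [0., 1., 0.]]) / 8.0
samples = rng.random((16, 16, 16))                 # 16 images of 16x16 samples
targets = np.stack([conv2d(s, true_kernel) for s in samples])

# Supervised training: fit one 3x3 conv layer to the offline renders by
# gradient descent on the mean-squared error.
kernel = rng.normal(scale=0.1, size=(3, 3))
lr = 0.5
for step in range(200):
    grad = np.zeros_like(kernel)
    for s, t in zip(samples, targets):
        err = conv2d(s, kernel) - t                # 14x14 residual image
        # The MSE gradient w.r.t. the kernel is a correlation of the input
        # with the residual.
        for a in range(3):
            for b in range(3):
                grad[a, b] += np.mean(err * s[a:a + 14, b:b + 14])
    kernel -= lr * grad / len(samples)

final_loss = np.mean((conv2d(samples[0], kernel) - targets[0]) ** 2)
```

At convergence the learned kernel recovers the hidden light-transport operator; at inference time, evaluating the trained network is far cheaper than tracing rays, which is what makes the approximation attractive for real-time use.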