Calculating global illumination in computer graphics is difficult, especially for complex scenes, because of the interreflections of light rays and their interactions with the materials of the objects composing the scene. Solutions based on ambient-light approximations have been implemented; however, these are computationally intensive and produce less precise images, since they approximate the ambient lighting component only coarsely. In this paper, we propose a method capable of approximating the global illumination effect. Our idea is to compute global illumination by combining three images: direct illumination, environmental light, and ambient occlusion. Direct illumination is calculated by a reference method. Environmental illumination is computed from a single 2D image using an adversarial neural network. Ambient occlusion is generated using conditional adversarial neural networks with an attention mechanism that focuses on the relevant image features during training. We use two image masks to preserve the object's position in screen space, which allows efficient reconstruction of the final result. Our solution produces images of quality comparable to reference images and does not require any computation in the 3D scene or screen space.
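The composition step described above can be sketched as a simple per-pixel image combination. The following is a minimal illustrative sketch, not the paper's implementation: the function name, mask semantics, and the exact blending formula (additive combination of direct and environmental light, attenuated by ambient occlusion, with masks keeping each contribution at its screen-space position) are assumptions for illustration.

```python
import numpy as np

def compose_global_illumination(direct, env, ao, obj_mask, bg_mask):
    """Combine three render buffers into an approximate global-illumination image.

    direct   -- direct-illumination image, floats in [0, 1], shape (H, W, 3)
    env      -- environmental-light image, same shape
    ao       -- ambient-occlusion image (1 = unoccluded), same shape
    obj_mask -- binary mask, shape (H, W, 1), selecting object pixels in screen space
    bg_mask  -- binary mask selecting the remaining (background) pixels
    """
    # Assumed blend: sum direct and environmental light, attenuated by
    # ambient occlusion on the object pixels.
    combined = (direct + env) * ao
    # The two masks keep each contribution at its screen-space position.
    result = obj_mask * combined + bg_mask * env
    return np.clip(result, 0.0, 1.0)

# Toy 2x2 example with constant buffers.
H, W = 2, 2
direct = np.full((H, W, 3), 0.4)
env = np.full((H, W, 3), 0.3)
ao = np.full((H, W, 3), 0.5)
obj_mask = np.ones((H, W, 1))
bg_mask = 1.0 - obj_mask
img = compose_global_illumination(direct, env, ao, obj_mask, bg_mask)
```

Because the three inputs are ordinary 2D images, this composition needs no access to the 3D scene, which matches the efficiency claim of the abstract.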