Modern computer games let players manually adjust rendering settings so that the computational load matches the performance of their hardware. Lowering texture resolution, shadow map detail, or anti-aliasing quality, for example, helps achieve smooth frame rates on systems with low-end graphics hardware. However, the large number of rendering parameters and their complex interdependencies make choosing a good configuration difficult. We address this problem by training a convolutional neural network (CNN) that compares reference images rendered at maximum quality with images rendered at reduced quality. The network detects and classifies artifacts in the degraded images and estimates how visible they are to human observers. It is trained on a large dataset of scenes rendered with a widely used game engine, for which image regions affected by quality reduction were manually annotated to obtain ground-truth labels. As a proof of concept, we implement a prototype forward renderer in OpenGL and use the trained network to assess image quality under different anti-aliasing settings, selecting the setting that minimizes artifact visibility. A user study confirms that our network predicts the visibility of rendering artifacts more reliably than established image quality metrics.
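The selection step described above can be illustrated with a short sketch. The Python fragment below shows one plausible way a trained visibility network could drive the choice of anti-aliasing setting; `render_frame`, `VisibilityCNN`, the candidate setting names, and the tolerance threshold are illustrative assumptions, not the paper's actual code or API.

```python
# Minimal sketch (not the authors' implementation): pick the cheapest
# anti-aliasing setting whose predicted artifact visibility stays below a
# tolerance. `render_frame` and `VisibilityCNN` are hypothetical stand-ins
# for the paper's OpenGL renderer and trained network.

import numpy as np


def render_frame(aa_setting: str) -> np.ndarray:
    """Hypothetical hook into the forward renderer; returns an RGB image."""
    raise NotImplementedError


class VisibilityCNN:
    """Hypothetical wrapper around the trained artifact-visibility network."""

    def predict(self, reference: np.ndarray, degraded: np.ndarray) -> float:
        """Return a scalar in [0, 1]: predicted probability that artifacts in
        `degraded` (relative to `reference`) are visible to an observer."""
        raise NotImplementedError


def pick_aa_setting(model: VisibilityCNN, tolerance: float = 0.1) -> str:
    # Candidate settings ordered from cheapest to most expensive (illustrative).
    candidates = ["off", "fxaa", "msaa_2x", "msaa_4x", "msaa_8x"]
    # Highest-quality render serves as the reference image.
    reference = render_frame(candidates[-1])
    for setting in candidates:
        visibility = model.predict(reference, render_frame(setting))
        if visibility <= tolerance:
            return setting  # cheapest setting with imperceptible artifacts
    return candidates[-1]
```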