Abstract

Acceleration techniques for rendering in general, and Ray-Casting in particular, have been the subject of much research in Computer Graphics. Most efforts have focused on new data structures for efficient ray/scene traversal and intersection. In this paper, we propose an approximate rendering acceleration technique built around a new feature-based clustering approach. The technique first preprocesses the scene, grouping elements according to their features into a set of channels using an information-theoretic approach. Then, at run time, a rendering strategy uses that clustering information to reconstruct the final image, deciding which areas can exploit feature coherence and thus be interpolated, and which areas require more involved calculations. The process starts with a low-resolution render that is iteratively refined up to the desired resolution by reusing previously computed pixels. Our experimental results show a significant speedup, of up to an order of magnitude, depending on the complexity of the per-pixel calculations, the screen size of the objects, and the number of clusters. Rendering quality and speed depend directly on the number of clusters and the number of reconstruction steps, both of which can easily be set by the user. Our findings show that feature-based clustering can significantly improve rendering speed when samples are chosen so that smooth regions can be interpolated. Our technique thus accelerates a range of popular and costly techniques, from texture mapping to complex ambient occlusion and soft and hard shadow calculations, and it can even be used in conjunction with more traditional acceleration methods.
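As a rough illustration of the run-time reconstruction described above, the sketch below is a minimal simplification of the idea, not the paper's implementation: `shade` and `cluster_id` are hypothetical placeholders for the expensive per-pixel computation and the precomputed feature clustering, and the power-of-two refinement schedule and four-neighbour coherence test are our own assumptions.

```python
import numpy as np

def shade(x, y):
    # Hypothetical stand-in for an expensive per-pixel computation
    # (e.g., ambient occlusion or soft shadows in the paper's setting).
    return np.array([x, y, 0.5 * (x + y)])

def cluster_id(x, y):
    # Hypothetical stand-in for the precomputed feature clustering:
    # here a coarse grid of labels; the paper derives its clusters from
    # information-theoretic feature channels instead.
    return int(4 * x) + 4 * int(4 * y)

def reconstruct(width, height, steps=3):
    # Initial low-resolution pass: shade one pixel per (2**steps)-sized block.
    stride = 2 ** steps
    img = np.zeros((height, width, 3))
    done = np.zeros((height, width), dtype=bool)
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            img[y, x] = shade(x / width, y / height)
            done[y, x] = True
    # Iterative refinement: halve the stride, reusing computed pixels.
    while stride > 1:
        half = stride // 2
        for y in range(0, height, half):
            for x in range(0, width, half):
                if done[y, x]:
                    continue  # reuse the pixel from a coarser pass
                u, v = x / width, y / height
                # Gather already-shaded neighbours at the coarser level.
                nbrs = [(ny, nx) for ny, nx in
                        ((y - half, x), (y + half, x), (y, x - half), (y, x + half))
                        if 0 <= ny < height and 0 <= nx < width and done[ny, nx]]
                labels = {cluster_id(nx / width, ny / height) for ny, nx in nbrs}
                if nbrs and labels == {cluster_id(u, v)}:
                    # Coherent region: cheap interpolation of shaded neighbours.
                    img[y, x] = np.mean([img[p] for p in nbrs], axis=0)
                else:
                    # Feature boundary: pay for the full per-pixel calculation.
                    img[y, x] = shade(u, v)
                done[y, x] = True
        stride = half
    return img

image = reconstruct(64, 64, steps=3)  # 64x64 image refined over 3 steps
```

The two user-visible knobs mentioned in the abstract appear here as the number of refinement steps (`steps`) and the granularity of the cluster labels: more clusters or more steps trade speed for reconstruction quality.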

Highlights

  • The problem of efficient image generation has been a cornerstone of research since the earliest days in Computer Graphics [1]

  • The technique we present in this paper is related to different areas in the Computer Graphics literature

  • We found that feature coherence in low-variability regions of a scene, when projected to screen space, can be exploited to drastically accelerate Ray-casting-based rendering

Introduction

The problem of efficient image generation has been a cornerstone of research since the earliest days of Computer Graphics [1]. Ray Tracing is one of the most popular techniques when generality, quality, and ease of implementation are taken into account, as it is able to handle most optical effects [2]. It is therefore logical that most efforts have been devoted to increasing the speed of these calculations [3,4,5,6]. Besides tracing the rays themselves, complex shading operations (e.g., complex BRDFs, sub-surface scattering, etc.) can be expensive to compute, considerably hindering rendering performance. Rough approximations, simplified calculations, or other trade-offs are sometimes used to accelerate these computations [7,8]. A promising avenue for optimization is to exploit the different kinds of coherence inherent to rendered scenes [9,10,11], but the complexity of these CPU-oriented approaches has precluded their adoption in modern hardware-based ray tracing.
