Abstract
Understanding and perceiving three-dimensional scientific visualizations, such as volume renderings, benefits from the visual cues produced by shading models. Conventional approaches use local shading models because they are computationally inexpensive and straightforward to implement. However, local shading models do not always provide adequate visual cues because they do not sufficiently take non-local information into account. Global illumination models achieve better visual cues but are often computationally expensive. Alternative illumination models, such as ambient occlusion, multidirectional shading, and shadows, have been shown to provide good perceptual cues. Although these models improve upon local shading models, they still require expensive preprocessing, extra GPU memory, and a high computational cost, which prevents interactivity during transfer-function manipulation and light-position changes. In this paper, we propose an approximate image-space multidirectional occlusion shading model for volume rendering. Our model is computationally less expensive than global illumination models and requires no preprocessing. Moreover, it allows interactive transfer-function manipulation and light-position changes. Our model simulates a wide range of shading behaviors, such as ambient occlusion and soft and hard shadows, and can be applied with little effort to existing rendering systems such as direct volume rendering. We show that the proposed model enhances visual cues at a modest computational cost.
Highlights
Direct volume rendering is a conventional technique to visualize volumetric datasets, such as measured medical data and scientific simulation data.
Many simplified shading models derived from global illumination [1], such as ambient occlusion and shadow maps, have been developed for interactive rendering performance; these are volumetric approaches.
Some of the approximations [2,3] operate interactively on-the-fly and do not require additional GPU memory. However, they are limited to a particular rendering technique, such as slice-based direct volume rendering, and their additional computation is proportional to the size of the volume data.
Summary
Direct volume rendering is a conventional technique to visualize volumetric datasets, such as measured medical data and scientific simulation data. Many simplified shading models derived from global illumination [1], such as ambient occlusion and shadow maps, have been developed for interactive rendering performance; these are volumetric approaches. Such techniques interactively provide perceptually better visual cues in the rendering. We present an approximate image-space multidirectional occlusion shading (ISMOS) model for direct volume rendering. Since our image-space model requires no notable extra computation time, ISMOS enables us to manipulate multiple light sources. Our contributions are: an interactive approximate image-space multidirectional occlusion shading model applicable to the conventional direct volume-rendering pipeline without preprocessing or additional GPU memory; the simulation of light sources, including ambient occlusion and soft and hard shadows, as a unified model; and the handling of multiple directional lights with only a minor additional overhead. Note that the transmittance approximation is executed on-the-fly in the first rendering pass; the resulting image-space representation contains information on the distribution of the transmittance and the depth.
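To make the image-space idea concrete, the following is a minimal CPU sketch of how a multidirectional occlusion term could be derived from the per-pixel transmittance and depth buffers produced by a first rendering pass. This is an illustrative assumption, not the paper's actual GPU implementation: the function name `ismos_occlusion`, the sampling scheme (marching a few pixels toward each light's image-space direction and attenuating by the transmittance of samples that lie nearer to the camera), and all parameters are hypothetical.

```python
import numpy as np

def ismos_occlusion(transmittance, depth, light_dirs, n_samples=8, step=1):
    """Hypothetical sketch of image-space multidirectional occlusion.

    For each pixel, march in image space toward each light direction and
    multiply in the transmittance of samples that are closer to the camera
    (potential occluders). The per-light visibilities are averaged, so a
    single light yields hard-shadow-like behavior while many directions
    approach an ambient-occlusion-like term.
    """
    h, w = depth.shape
    occ = np.zeros((h, w), dtype=np.float32)
    for dx, dy in light_dirs:
        light_vis = np.ones((h, w), dtype=np.float32)
        for s in range(1, n_samples + 1):
            # Offset sample coordinates along the light's image-space direction,
            # clamped to the buffer bounds.
            ys = np.clip(np.arange(h) + int(round(dy * s * step)), 0, h - 1)
            xs = np.clip(np.arange(w) + int(round(dx * s * step)), 0, w - 1)
            t_s = transmittance[np.ix_(ys, xs)]
            d_s = depth[np.ix_(ys, xs)]
            # A sample occludes only if it lies in front of the current pixel.
            blocker = d_s < depth
            light_vis *= np.where(blocker, t_s, 1.0)
        occ += light_vis
    # Average visibility over all light directions.
    return occ / len(light_dirs)
```

Because each additional light direction only adds another short image-space sampling loop over buffers that already exist, handling multiple directional lights stays cheap, which mirrors the "minor additional overhead" claim above.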