Abstract
In direct volume rendering (DVR), it often takes a novice a long time to manipulate the transfer function (TF) and analyze the volume data. To lessen this difficulty, several researchers have developed deep learning techniques; however, the existing techniques are not easy to apply directly to existing DVR pipelines. In this study, we propose image-based TF colorization with a CNN to automatically generate a direct volume-rendered image (DVRI) similar to a target image. Our system comprises CNN model training, TF labeling, image-based TF generation, and volume rendering that matches the target image. We introduce a technique for training the CNN and labeling the TF with images similar to the input volume dataset. Moreover, we extract the primary colors from the target image according to the labels classified by the CNN model, and we render the volume data with the resulting TF to produce a DVRI that reproduces the prominent colors of the target image.
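As a rough illustration of the color-extraction and TF-colorization steps described above, the following Python sketch clusters the pixels of the target image to find its primary colors and assigns them to labeled intensity ranges of the TF. The function names, the k-means choice, and the opacity ramp are illustrative assumptions, not the implementation from the paper.

    # A minimal sketch of image-based TF colorization, assuming the target
    # image is an RGB array and the TF is a per-intensity RGBA lookup table;
    # names and parameters are illustrative only.
    import numpy as np
    from sklearn.cluster import KMeans

    def extract_primary_colors(target_image, n_colors=4):
        """Cluster the pixels of the target image to find its dominant colors."""
        pixels = target_image.reshape(-1, 3).astype(np.float32)
        kmeans = KMeans(n_clusters=n_colors, n_init=10).fit(pixels)
        return kmeans.cluster_centers_  # shape (n_colors, 3), RGB in [0, 255]

    def colorize_transfer_function(primary_colors, intensity_clusters, tf_size=256):
        """Assign one extracted color to each labeled intensity cluster of the TF.

        intensity_clusters: list of (lo, hi) scalar-value ranges, one per label.
        Returns an RGBA lookup table; the opacity here is a placeholder ramp.
        """
        tf_table = np.zeros((tf_size, 4), dtype=np.float32)
        for color, (lo, hi) in zip(primary_colors, intensity_clusters):
            tf_table[lo:hi, :3] = color / 255.0
            tf_table[lo:hi, 3] = np.linspace(0.0, 0.5, hi - lo)  # simple opacity ramp
        return tf_table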
Highlights
Direct volume rendering (DVR) is a technique to visualize 3D volume data by projecting it onto a 2D plane
To acquire direct volume-rendered images (DVRIs) from discrete volume data, a camera is placed in the space containing the 3D volume, and a color and an opacity are assigned to every voxel (see the sketch after this list)
We propose a technique to acquire DVRIs using transfer functions (TFs) clustered in the attribute space, inspired by the study of Maciejewski et al. [3], and classify the obtained DVRIs with a Convolutional Neural Network (CNN)
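The sketch below illustrates, under simplifying assumptions, how the per-voxel colors and opacities mentioned in the highlights are composited along a viewing ray to form a DVRI; it is a generic front-to-back compositing loop, not the renderer used in the study.

    # A simplified front-to-back compositing loop along one viewing ray,
    # assuming the scalar field has already been sampled along the ray and
    # the TF is a (256, 4) RGBA lookup table; illustrative sketch only.
    import numpy as np

    def composite_ray(samples, tf_table):
        """samples: 1D array of scalar values in [0, 255] along one ray."""
        color = np.zeros(3)
        alpha = 0.0
        for s in samples:
            rgb_s, a_s = tf_table[int(s), :3], tf_table[int(s), 3]
            color += (1.0 - alpha) * a_s * rgb_s   # accumulate pre-multiplied color
            alpha += (1.0 - alpha) * a_s           # accumulate opacity
            if alpha >= 0.99:                      # early ray termination
                break
        return color, alpha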
Summary
Direct volume rendering (DVR) is a technique to visualize 3D volume data by projecting it onto a 2D plane. The renderer visualizes the 3D volume data from the camera and voxel information by applying DVR techniques such as ray casting. In this process, a transfer function (TF) is manipulated to determine the optical characteristics of the voxels within the volume renderer. Several volume rendering techniques employ deep learning, including volume segmentation [4], [5], viewpoint estimation [6], transfer function design [7], lighting [8], and quality improvement [9]. These studies utilize Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs).
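For concreteness, a minimal Keras sketch of the kind of CNN image classifier mentioned above is given here; the input size, layer configuration, and number of classes are placeholders and do not reflect the architecture used in the paper.

    # A generic CNN classifier sketch, assuming DVRIs are resized to
    # 128x128 RGB and labeled by TF cluster; layer sizes and the number of
    # classes are assumptions, not the paper's architecture.
    from tensorflow.keras import layers, models

    def build_dvri_classifier(num_classes=8, input_shape=(128, 128, 3)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model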