Abstract

Viewpoint estimation from 2D rendered images helps us understand how users select viewpoints for volume visualization and can guide users toward better viewpoints based on previous visualizations. In this article, we propose a viewpoint estimation method for volume visualization based on Convolutional Neural Networks (CNNs). We first design an overfitting-resistant image rendering pipeline to generate training images with accurate viewpoint annotations, and then train a category-specific viewpoint classification network to estimate the viewpoint of a given rendered image. Our method achieves good performance on images rendered with different transfer functions and rendering parameters across several categories. We apply our model to recover the viewpoints of rendered images in publications, showing how experts look at volumes. We also introduce a CNN-feature-based image similarity measure for similarity-voting-based viewpoint selection, which can suggest semantically meaningful optimal viewpoints for different volumes and transfer functions.
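The similarity-voting idea mentioned above can be illustrated with a minimal sketch. The paper's measure is built on CNN features; here we assume those feature vectors have already been extracted, and the function names, toy vectors, and viewpoint labels below are illustrative assumptions, not the authors' implementation. Each of the k gallery images most similar to the query casts a vote (weighted by cosine similarity) for its annotated viewpoint:

```python
import numpy as np

def cosine_similarity(a, b):
    # standard cosine similarity between two feature vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def vote_viewpoint(query_feat, gallery_feats, gallery_viewpoints, k=3):
    """Pick a viewpoint label by similarity-weighted voting.

    query_feat         -- CNN feature vector of the query image (assumed precomputed)
    gallery_feats      -- list of feature vectors with known viewpoints
    gallery_viewpoints -- viewpoint label for each gallery image
    """
    sims = [cosine_similarity(query_feat, g) for g in gallery_feats]
    top = np.argsort(sims)[::-1][:k]  # indices of the k most similar images
    votes = {}
    for i in top:
        vp = gallery_viewpoints[i]
        votes[vp] = votes.get(vp, 0.0) + sims[i]  # similarity-weighted vote
    return max(votes, key=votes.get)

# Toy example with hand-made 2-D "features":
gallery = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
labels = ["front", "front", "side"]
print(vote_viewpoint(np.array([1.0, 0.05]), gallery, labels))  # → front
```

In the article the gallery would hold rendered images of the same volume category, so the vote aggregates evidence from several similar renderings rather than trusting a single nearest neighbor.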
