Abstract

We propose a method for simulating cloth with meshes dynamically refined according to visual saliency. It is commonly accepted that the regions of an image a viewer attends to should carry more detail than the rest. For a given scene, a low-resolution cloth mesh is first simulated and rendered into images in a preview stage. Pixel saliency values of these images are predicted with a pre-trained saliency prediction model, and these pixel saliencies are then mapped to vertex saliencies on the corresponding meshes. Vertex saliency, together with camera positions and several geometric surface features, guides the dynamic remeshing used for simulation in the production stage. To build the saliency prediction model, images extracted from various videos of clothing scenes were used as training data. Participants were asked to watch these videos while their eye movements were tracked, and a saliency map was generated from the eye-tracking data for each extracted frame. Image feature vectors and the corresponding saliency labels were then used to train a Support Vector Machine, yielding the saliency prediction model. Our method greatly reduces the number of vertices and faces in the clothing model, achieving a speed-up of more than 3× for scenes with a single dressed character and more than 5× for multi-character scenes. The proposed technique can work together with view-dependent techniques for offline simulation.
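As an illustration of the pixel-to-vertex saliency transfer described above, the following is a minimal sketch, not the paper's actual implementation: it assumes per-vertex positions, a model-view-projection matrix, and a predicted per-pixel saliency map, projects each vertex to image space, and samples the map at the projected pixel. All names and the projection conventions here are assumptions for illustration.

```python
import numpy as np

def vertex_saliency_from_map(vertices, mvp, saliency_map):
    """Transfer per-pixel saliency to mesh vertices (illustrative sketch).

    vertices:     (N, 3) array of vertex positions
    mvp:          (4, 4) model-view-projection matrix
    saliency_map: (H, W) array of predicted pixel saliencies in [0, 1]
    Returns an (N,) array of per-vertex saliency values.
    """
    h, w = saliency_map.shape
    # Homogeneous coordinates and perspective projection.
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    clip = homo @ mvp.T
    ndc = clip[:, :3] / clip[:, 3:4]  # normalized device coordinates in [-1, 1]
    # Map NDC to pixel coordinates (y flipped for image convention).
    px = np.clip(((ndc[:, 0] + 1) * 0.5 * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(((1 - ndc[:, 1]) * 0.5 * (h - 1)).astype(int), 0, h - 1)
    # Sample the saliency map at each projected vertex location.
    # Occluded or off-screen vertices would need extra handling
    # (e.g. depth testing); omitted in this sketch.
    return saliency_map[py, px]
```

In the full pipeline, these per-vertex values would then be combined with camera positions and geometric surface features to drive the dynamic remeshing, as described in the abstract.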
