Abstract

Procedural textures are widely used in computer games and animations for efficiently rendering natural scenes. They are generated by mathematical functions, and users must tune the model parameters to produce the desired texture. However, unless one has a good knowledge of these procedural models, it is difficult to predict which model can produce which types of textures. This paper proposes a framework for generating new procedural textures from examples. The new texture can have the same perceptual attributes as the input example, or attributes redefined by the user. To achieve this goal, we first introduce a PCA-based Convolutional Network (PCN) to effectively learn texture features. These PCN features can be used to accurately predict the perceptual scales of the input example and to select a procedural model capable of generating it. The perceptual scales of the input can be redefined by users and mapped to a point in a perceptual texture space, which is established in advance from a training dataset. Finally, we determine the parameters of the procedural generation model by measuring perceptual similarity in this space. Extensive experiments show that our method produces promising results.
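To make the pipeline concrete, the sketch below illustrates the three stages the abstract describes: learning PCA-based convolutional filters from patches, pooling filter responses into a feature vector, and searching procedural-model parameters by feature-space similarity. This is a minimal illustration, not the authors' implementation; the function names (`learn_pca_filters`, `pcn_features`, `match_parameters`), patch size, filter count, and the Euclidean distance used as a stand-in for perceptual similarity are all assumptions.

```python
# Hypothetical sketch of a PCANet-style feature pipeline; not the paper's code.
import numpy as np
from scipy.signal import convolve2d

def learn_pca_filters(images, patch=7, n_filters=8):
    """Learn convolutional filters as top principal components of image patches."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(0, h - patch, patch):
            for j in range(0, w - patch, patch):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())  # remove per-patch mean
    X = np.asarray(patches)
    # Right singular vectors of the patch matrix are the PCA filters.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)

def pcn_features(img, filters):
    """Convolve with PCA filters and pool response energy into a feature vector."""
    responses = [convolve2d(img, f, mode='valid') for f in filters]
    return np.array([np.mean(r ** 2) for r in responses])  # simple energy pooling

def match_parameters(example_feat, param_grid, generator, filters):
    """Pick the generator parameters whose output is closest in feature space."""
    best, best_dist = None, np.inf
    for params in param_grid:
        feat = pcn_features(generator(params), filters)
        d = np.linalg.norm(feat - example_feat)  # proxy for perceptual distance
        if d < best_dist:
            best, best_dist = params, d
    return best
```

In the paper, the similarity measurement takes place in a learned perceptual texture space rather than in raw feature space, and the PCN predicts perceptual scales directly; the grid search above is only one simple way to realize the final parameter-selection step.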
