Abstract

We propose the first approach that can generate procedural three-dimensional (3D) hair, including braids, from a single-view photograph. Existing single-view image-based hair modeling methods fail to handle braided hairstyles. Our approach combines image processing, deep neural networks, and two-dimensional (2D) and 3D geometric algorithms. To train our neural network, we create a braid-unit data set. Our recognition and segmentation system uses convolutional neural networks to segment the hair region and, within it, the braid and non-braid regions. We further process the images to obtain the locations, sizes, and orientations of the braid units. Given these braid units, we perform braid structure analysis to obtain the braid strand curves. The 3D braids are procedurally modeled as 3D helical curves whose parameters are extracted from the 2D image analysis. In addition, we extract 2D hair strands from the non-braid region using Gabor filters and orientation maps. A 3D hair volume is then generated from the silhouette of the hair region. We project the 2D hair strands and braids onto the 3D hair volume to obtain the 3D hair strands and 3D braids. The strands in the braid and non-braid regions serve as guides for generating dense hair strands: dense strands are emitted from the hair-root triangle mesh and follow the guide strands. Using a sparse set of landmarks, the hair region of the photograph is texture-mapped onto the 3D hair-root mesh and used to color the strands. We successfully tested our approach on photographs showing variations in braid style and hair color.
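The helical-curve representation of braid strands can be sketched as follows. This is an illustrative example only, not the paper's actual parameterization: the function name, the fixed z-axis, and the radius/pitch/phase parameters are assumptions; in the described method these parameters would come from the 2D braid-unit analysis.

```python
import math

def helical_strand(radius, pitch, phase, turns, samples_per_turn=32):
    """Sample points along a 3D helix around the z-axis.

    radius: helix radius; pitch: rise in z per full turn;
    phase: angular offset distinguishing strands within one braid unit.
    """
    points = []
    n = int(turns * samples_per_turn)
    for i in range(n + 1):
        t = 2.0 * math.pi * i / samples_per_turn  # angle parameter
        points.append((
            radius * math.cos(t + phase),
            radius * math.sin(t + phase),
            pitch * t / (2.0 * math.pi),          # linear rise along the axis
        ))
    return points

# Three phase-shifted helices sketch the strands of a three-strand braid unit.
braid = [helical_strand(1.0, 0.5, k * 2.0 * math.pi / 3.0, turns=4)
         for k in range(3)]
```

A real braid additionally bends the helix axis along the braid's 2D centerline; here the axis is kept straight for brevity.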
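The Gabor-filter orientation map mentioned for the non-braid region can be sketched as below. This is a minimal, assumption-laden illustration: the kernel size, sigma, wavelength, and number of orientations are made up here, and the naive patch-wise filtering is far slower than a real implementation would be; the point is only that the per-pixel orientation is taken as the angle of the strongest Gabor response.

```python
import numpy as np

def gabor_kernel(theta, ksize=9, sigma=2.0, lam=4.0):
    """Real part of a Gabor filter oriented at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / lam))

def orientation_map(image, n_orientations=8):
    """Per-pixel dominant orientation: argmax over Gabor filter responses."""
    thetas = np.pi * np.arange(n_orientations) / n_orientations
    responses = []
    for theta in thetas:
        k = gabor_kernel(theta)
        half = k.shape[0] // 2
        padded = np.pad(image, half, mode="reflect")
        resp = np.zeros(image.shape, dtype=float)
        for i in range(image.shape[0]):          # naive filtering for clarity
            for j in range(image.shape[1]):
                patch = padded[i:i + k.shape[0], j:j + k.shape[1]]
                resp[i, j] = np.sum(patch * k)
        responses.append(np.abs(resp))
    return thetas[np.argmax(np.stack(responses), axis=0)]
```

On an image of horizontal stripes (intensity varying down the rows), the map peaks at the orientation aligned with the intensity variation; the extracted angles would then seed the 2D strand tracing.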