Abstract

Artistic video stylization, which is widely used in multimedia entertainment, transforms a given video into different artistic styles. Most existing video stylization algorithms can simulate only a single style or a limited set of styles. Although some algorithms achieve multi-style video processing, they are complex and difficult to implement. To address this problem, we propose a multi-style video stylization algorithm based on texture advection, in which different artistic styles are synthesized and transferred from user-specified texture samples of the desired styles. We use direction field-guided texture synthesis to compute the texture layer that represents the artistic style, and the orientation changes in the synthesized anisotropic textures effectively simulate painterly directional video styles. During texture advection under the optical flow field, local regions of the texture layer become distorted. To address this issue, we propose a texture inpainting step that resynthesizes the distorted regions and keeps the stylized video temporally coherent. We further accelerate video stylization with the CUDA parallel computing framework, which computes the morphological operations used for video abstraction in parallel. Finally, we produce stylized videos in multiple artistic styles with satisfactory experimental results, including oil painting, watercolor painting, and stylized line drawing styles.
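The core advection step described above can be illustrated with a minimal sketch: the texture layer of the current frame is obtained by warping the previous frame's texture backward along the optical flow field. The function name, nearest-neighbour sampling, and (dy, dx) flow convention below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def advect_texture(texture, flow):
    """Warp a texture layer along an optical-flow field (backward warp,
    nearest-neighbour sampling -- an illustrative simplification).
    texture: (H, W) array; flow: (H, W, 2) per-pixel (dy, dx) displacement."""
    h, w = texture.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel samples the previous frame at the location it came from.
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, w - 1)
    return texture[src_y, src_x]

# A uniform flow of (0, 1) shifts the texture one pixel to the right;
# pixels advected from outside the frame (here the left border) are the
# "distorted" regions that the paper's texture inpainting would resynthesize.
tex = np.arange(12, dtype=float).reshape(3, 4)
flow = np.zeros((3, 4, 2))
flow[..., 1] = 1.0
warped = advect_texture(tex, flow)
```

In practice bilinear sampling and occlusion/disocclusion detection would replace the clamped nearest-neighbour lookup; the clamped border pixels here stand in for the regions the inpainting step must fill.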
