Abstract

Most deep-learning-based style transfer methods for video use features extracted from only a single style image to perform texture synthesis. This limits users' creative control, and the style is applied uniformly to both foreground objects and the background. This paper presents a painterly style transfer algorithm for video based on semantic segmentation, which separates the foreground from the background so that each can be stylized differently. First, a fully convolutional network was constructed for semantic segmentation, and a GrabCut method with a dynamic bounding box was used to correct the segments and refine contours and edges. Second, an enhanced motion estimation method was applied separately to the foreground and background. Third, style transfer was used to extract textures from a style image and synthesize them onto a content image while preserving the content image's structure. The proposed method not only improves the motion boundaries of the optical flow but also rectifies discontinuous and irregular segmentations caused by occlusion and shape deformation. Finally, the method was evaluated on a variety of videos.
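To make the segmentation-refinement step concrete, the sketch below shows how a GrabCut pass seeded with a bounding box can refine a coarse foreground mask using OpenCV's `cv2.grabCut`. The image path, rectangle, and iteration count are illustrative assumptions; the paper's dynamic bounding box (derived from the FCN output) is not reproduced here.

```python
import cv2
import numpy as np

# Illustrative GrabCut refinement sketch (not the paper's exact pipeline).
img = cv2.imread("frame.png")              # a single video frame (hypothetical path)
mask = np.zeros(img.shape[:2], np.uint8)   # per-pixel GrabCut labels
bgd_model = np.zeros((1, 65), np.float64)  # internal background GMM state
fgd_model = np.zeros((1, 65), np.float64)  # internal foreground GMM state

# In the paper the bounding box is derived dynamically from the FCN
# segmentation; here a fixed (x, y, width, height) rectangle is assumed.
rect = (50, 30, 400, 300)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels labeled definite or probable foreground form the refined mask,
# which tightens contours and edges around the segmented object.
refined = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
foreground = img * refined[:, :, None]
```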
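The motion-estimation step relies on dense optical flow between consecutive frames, computed for the foreground and background independently so that motion boundaries stay aligned with the segmentation contours. The minimal sketch below uses OpenCV's Farnebäck flow masked by the segmentation; the file names and flow parameters are assumptions, and the paper's enhanced estimator is not reproduced.

```python
import cv2
import numpy as np

# Illustrative sketch (assumed file names and Farnebäck parameters),
# not the paper's enhanced motion estimation method.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
fg_mask = cv2.imread("fg_mask.png", cv2.IMREAD_GRAYSCALE) > 0  # from the segmentation step

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0
)  # flow[y, x] = (dx, dy) per-pixel displacement

# Estimating foreground and background motion separately keeps motion
# boundaries sharp at the segmentation contours.
fg_flow = flow * fg_mask[:, :, None].astype(np.float32)
bg_flow = flow * (~fg_mask)[:, :, None].astype(np.float32)
```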
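The texture-extraction step follows the common neural-style-transfer formulation, in which the style of an image is summarized by Gram matrices of CNN feature maps and matched via a layer-wise loss. The NumPy sketch below shows that computation under the assumption of Gatys-style losses; the specific network and layers used in the paper are not specified here.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a (C, H, W) feature map: channel-wise correlations
    that capture texture while discarding spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(style_feats, generated_feats) -> float:
    """Squared Frobenius distance between Gram matrices, summed over layers.
    Both arguments are lists of (C, H, W) arrays from the same CNN layers."""
    return sum(
        float(np.sum((gram_matrix(s) - gram_matrix(g)) ** 2))
        for s, g in zip(style_feats, generated_feats)
    )
```

Minimizing this loss alongside a content loss transfers the style image's textures onto the content image while preserving its structure.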
