Abstract

We present a generative model with spatial control that synthesizes dual artistic media effects, applying a different effect to the foreground and the background of an image. Deep learning models that apply an artistic media effect to a photograph require a training dataset of paired photographs and corresponding artwork images. To build this dataset, we apply existing techniques that generate artwork images, including colored pencil, watercolor, and abstraction effects, from photographs. To produce a dual artistic effect, we use semantic segmentation to separate the foreground and background of a photograph. Our model then applies distinct artistic media effects to the foreground and background through a spatial control module such as a SPADE block.
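The abstract's spatial control module follows the SPADE idea: activations are normalized and then modulated per pixel by scale and shift parameters predicted from the segmentation map, so pixels in different semantic regions receive different statistics. A minimal NumPy sketch of this mechanism is shown below; the function name, the use of instance-style normalization, and the 1x1 (pointwise) projections `w_gamma`/`w_beta` are illustrative assumptions, not the authors' implementation, which in practice would use learned convolutional layers.

```python
import numpy as np

def spade_modulate(x, seg, w_gamma, w_beta, eps=1e-5):
    """SPADE-style spatially adaptive normalization (simplified sketch).

    x:       activation map, shape (C, H, W)
    seg:     one-hot segmentation map, shape (S, H, W)
    w_gamma, w_beta: (C, S) weights of hypothetical 1x1 convolutions
                     mapping the segmentation map to per-pixel
                     scale and shift parameters.
    """
    # Normalize each channel over its spatial dimensions.
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mu) / (sigma + eps)

    # Per-pixel, per-channel modulation predicted from the segmentation map.
    gamma = np.einsum('cs,shw->chw', w_gamma, seg)
    beta = np.einsum('cs,shw->chw', w_beta, seg)

    # Denormalize: foreground and background labels yield different
    # scales and shifts, which is what lets the generator render a
    # distinct artistic media effect in each region.
    return x_norm * (1.0 + gamma) + beta
```

In a full generator, several such blocks would be stacked, each with its own learned projection of the segmentation map, so the foreground/background distinction is injected at every resolution rather than only at the input.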
