Abstract

Real-world paintings are made by artists using brush strokes as the rendering primitive to depict semantic content. The bulk of Neural Style Transfer (NST) work transfers style using texture patches, not strokes. The output resembles the content image traced over with the style texture: it does not look painterly. We adopt a very different approach based on strokes. Our contribution is to analyse paintings to learn stroke families, that is, distributions over strokes grouped by shape (a dot, a straight line, a curved arc, etc.). When synthesising a new output, these distributions are sampled to ensure the output is painted with the correct style of stroke. Consequently, our output looks more "painterly" than texture-based NST output. Furthermore, where strokes are placed is an important factor in determining output quality, and we address this aspect as well. Humans place strokes to emphasize salient, semantically meaningful image content, whereas conventional NST uses a content loss premised on filter responses that is agnostic to salience. We show that replacing that loss with one based on a language-image model benefits the output by placing greater emphasis on salient content.
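As a rough illustration of the second idea (a sketch under our own assumptions, not the authors' exact formulation), a content loss built on a language-image model such as CLIP compares image embeddings of the output and the content image instead of VGG filter responses, so that semantically salient content carries more weight. The snippet below assumes PyTorch and OpenAI's `clip` package; the function name `clip_content_loss` is hypothetical.

```python
# Minimal sketch of a language-image (CLIP) based content loss.
# Assumes: `pip install torch git+https://github.com/openai/CLIP.git`
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)  # returns (model, preprocess)
model.eval()


def clip_content_loss(output_img: torch.Tensor, content_img: torch.Tensor) -> torch.Tensor:
    """Cosine distance between CLIP image embeddings of two image batches.

    Both inputs are (N, 3, 224, 224) tensors already normalised with CLIP's
    preprocessing statistics; gradients flow back only into `output_img`.
    (Hypothetical helper, not the paper's published loss.)
    """
    out_feat = model.encode_image(output_img)
    with torch.no_grad():
        ref_feat = model.encode_image(content_img)
    out_feat = out_feat / out_feat.norm(dim=-1, keepdim=True)
    ref_feat = ref_feat / ref_feat.norm(dim=-1, keepdim=True)
    # 1 - cosine similarity, averaged over the batch
    return (1.0 - (out_feat * ref_feat).sum(dim=-1)).mean()
```

In an optimisation-based stylisation loop, this term would replace the usual VGG-feature content loss while the stroke-rendering and style terms are kept unchanged.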
