Abstract

Neural style transfer has attracted wide attention for the surprising quality of its synthesized images. However, the visual quality of existing methods often suffers from style transfer noise (STN). Unlike real-world noise introduced during natural image capture, STN arises from the stylized-image synthesis process. This difference poses two major challenges for stylized-image denoising: (1) synthesized stylized images have no corresponding ground-truth images, which makes it difficult to obtain noisy-clean sample pairs for training denoising networks; and (2) STN is usually color noise, or is accompanied by color distortion, so color correction must be considered during denoising. To tackle the first challenge, we propose a novel strategy called noise style transfer to produce noisy-clean sample pairs. Specifically, noise style transfer treats noise as a special style texture and transfers it from the noisy stylized image onto the clean style image. To tackle the second challenge, we design a Quasi Siamese denoising network, which takes the noisy sample and its high-saturation (HS) version as the inputs of two network branches during training. Through mutual supervision between the two branches, the “photo developer” effect of HS images on color noise enables simultaneous noise removal and color correction. Extensive experiments on various stylized images demonstrate the superior performance of our approach. With strong adaptability to different style transfer methods, our approach removes distinctive STN while preserving artistic texture, and achieves a further reduction in style loss of up to 57.7%.
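The two ingredients of the Quasi Siamese training described above can be sketched in code. The following is a minimal, hypothetical illustration only: the paper does not specify the HS transform or the exact loss, so `high_saturation` here is a simple stand-in (pushing each pixel away from its gray value), and `quasi_siamese_loss` assumes a plain form of mutual supervision (per-branch reconstruction terms plus a consistency term between the two branches' outputs). The branch "outputs" in the usage example are placeholders for a real denoising model.

```python
import numpy as np

def high_saturation(img, factor=1.5):
    """Boost color saturation by pushing each pixel away from its gray value.
    Illustrative stand-in for the paper's HS transform (assumed form)."""
    gray = img.mean(axis=-1, keepdims=True)  # per-pixel luminance proxy
    return np.clip(gray + factor * (img - gray), 0.0, 1.0)

def quasi_siamese_loss(pred_a, pred_b, clean, mutual_weight=0.1):
    """Reconstruction loss for each branch plus a mutual-supervision term
    tying the two branches' outputs together (hypothetical form)."""
    rec_a = np.mean((pred_a - clean) ** 2)   # noisy-input branch vs. clean target
    rec_b = np.mean((pred_b - clean) ** 2)   # HS-input branch vs. clean target
    mutual = np.mean((pred_a - pred_b) ** 2) # branches supervise each other
    return rec_a + rec_b + mutual_weight * mutual

# Toy usage: a noisy sample and its HS counterpart feed the two branches.
rng = np.random.default_rng(0)
clean = rng.random((4, 4, 3))
noisy = np.clip(clean + 0.05 * rng.standard_normal(clean.shape), 0.0, 1.0)
noisy_hs = high_saturation(noisy)
# Placeholder "branch outputs"; a real model would produce these.
loss = quasi_siamese_loss(noisy, noisy_hs, clean)
print(float(loss) >= 0.0)
```

In an actual training setup the two branches would share (or partially share) denoiser weights, which is what makes the network "quasi" Siamese rather than two independent models.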
