Abstract

Recently, context learning networks have shown promise in filling large holes in natural images. These networks decorate the predicted content with high-frequency details by borrowing or copying neural features from the known region. However, this operation may introduce undesired content changes in the synthesized region, especially when similar neural patterns cannot be found in the known region. To solve this problem, we present a network named Artist-Net that explicitly decomposes an image into a content code and a style code. Artist-Net completes a corrupted image the way an artist restores a damaged picture. It produces more accurate content by inferring the content code of the corrupted image in a latent space whose dimension is lower than that of the original image. Artist-Net also keeps the style consistent over the entire image by decorating the inferred content code with a style code extracted from the known region. Experiments on multiple datasets, including structural and natural images, demonstrate that the proposed network outperforms existing methods in terms of both content accuracy and texture detail.
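To make the content/style decomposition described above concrete, here is a minimal sketch of such a pipeline in PyTorch. All module names, layer sizes, the AdaIN-style modulation, and the masking scheme are illustrative assumptions for a generic content-encoder / style-encoder / decoder setup, not the authors' actual Artist-Net architecture.

```python
# Illustrative content/style decomposition for inpainting.
# Everything below (channel counts, layers, masking) is an assumption,
# not the paper's architecture.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps the masked image to a low-dimensional spatial content code."""
    def __init__(self, in_ch=4, code_ch=64):  # 3 RGB channels + 1 mask channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, code_ch, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Pools the known region into a global style vector."""
    def __init__(self, in_ch=3, style_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, style_dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Decoder(nn.Module):
    """Decorates the inferred content code with the style vector (AdaIN-like)."""
    def __init__(self, code_ch=64, style_dim=128, out_ch=3):
        super().__init__()
        self.affine = nn.Linear(style_dim, 2 * code_ch)  # per-channel scale/shift
        self.up = nn.Sequential(
            nn.ConvTranspose2d(code_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, content, style):
        scale, shift = self.affine(style).chunk(2, dim=1)
        content = content * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.up(content)

# Usage: complete a 256x256 image whose hole is given by a binary mask.
img = torch.rand(1, 3, 256, 256)           # corrupted input
mask = torch.ones(1, 1, 256, 256)          # 1 = known pixel, 0 = hole
mask[:, :, 96:160, 96:160] = 0

content = ContentEncoder()(torch.cat([img * mask, mask], dim=1))
style = StyleEncoder()(img * mask)         # style comes from the known region only
completed = Decoder()(content, style)      # full reconstruction
output = img * mask + completed * (1 - mask)  # paste the prediction into the hole
print(output.shape)                        # torch.Size([1, 3, 256, 256])
```

The key design point the abstract emphasizes is that the hole is filled by inferring the compact content code and then re-applying the style of the known region, rather than by copying feature patches from the known region directly.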
