Abstract

Most variational formulations for structure-texture image decomposition force structure images to have a small norm in some functional space, and they share a common notion of edges, i.e., large gradients or intensity differences. However, such a definition makes it difficult to distinguish structure edges from oscillations that have a fine spatial scale but high contrast. In this paper, we introduce a new model that learns a deep variational prior for structure images without explicit training data. An alternating direction method of multipliers (ADMM) algorithm and its modular structure are adopted to plug deep variational priors into an iterative smoothing process. The central observations are that convolutional neural networks (CNNs) can replace the total variation prior, and are indeed powerful enough to capture the nature of structure and texture. We show that our priors learned with CNNs successfully differentiate high-amplitude details from structure edges and avoid halo artifacts. Unlike previous data-driven smoothing schemes, our formulation provides another degree of freedom to produce continuous smoothing effects. Experimental results demonstrate the effectiveness of our approach on various computational photography and image processing applications, including texture removal, detail manipulation, HDR tone mapping, and non-photorealistic abstraction.
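For orientation, the sketch below shows the generic plug-and-play ADMM splitting that the abstract alludes to, in which a learned prior stands in for the proximal step of a handcrafted regularizer such as total variation. This is the standard template, not necessarily the paper's exact formulation; the symbols are illustrative assumptions: $I$ is the input image, $S$ the structure layer, $f$ a data-fidelity term, $\rho$ the ADMM penalty parameter, $Z$ and $U$ the auxiliary and scaled dual variables, and $\mathcal{D}_{\theta}$ the CNN acting as the learned variational prior.

\begin{aligned}
S^{k+1} &= \arg\min_{S}\; f(S, I) + \tfrac{\rho}{2}\,\lVert S - Z^{k} + U^{k}\rVert_2^2, \\
Z^{k+1} &= \mathcal{D}_{\theta}\!\bigl(S^{k+1} + U^{k}\bigr) \quad \text{(learned prior in place of the TV proximal operator)}, \\
U^{k+1} &= U^{k} + S^{k+1} - Z^{k+1}.
\end{aligned}

Iterating these three steps produces the smoothing process the abstract describes. In plug-and-play schemes of this kind, the penalty parameter and the strength of the prior are natural knobs for adjusting how much detail is removed; whether either corresponds to the extra degree of freedom mentioned in the abstract is an assumption here, not a claim from the paper.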
