Abstract

Deep learning techniques have recently made considerable progress in image inpainting by introducing prior knowledge, e.g., texture and structure. However, existing methods still suffer from artefacts such as distorted textures and abrupt colors because they insufficiently consider the correlations among prior visual features. In this paper, we propose a novel progressive and multi-prior-guided network (PMPN) for image inpainting, inspired by the human painting process: a painter first constructs the sketch, then generates the corresponding textures, and finally fills colors into the appropriate locations. In particular, to model global multi-scale contexts during reconstruction, we design a bi-directional cross-stage perception module that captures spatial information across branches and stages and guides the model to synthesize natural, consistent textures. Our proposed PMPN is evaluated on three publicly available datasets and outperforms current state-of-the-art models.
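The progressive pipeline the abstract describes (structure, then texture, then color) can be sketched at a high level as follows. This is a minimal illustrative assumption, not the authors' implementation: the stage functions, shapes, and stand-in operations below are all hypothetical placeholders for the learned sub-networks in PMPN.

```python
import numpy as np

def structure_stage(masked_img, mask):
    # Predict a coarse structural sketch for the missing region
    # (stand-in for a learned structure generator: fill the hole
    # with the mean of the known pixels).
    filled = masked_img.copy()
    filled[mask] = masked_img[~mask].mean()
    return filled

def texture_stage(structure, mask):
    # Refine the structural estimate with texture detail
    # (stand-in: perturb the hole region slightly).
    refined = structure.copy()
    refined[mask] += 0.01
    return refined

def color_stage(texture, mask):
    # Final color harmonization pass
    # (stand-in: clamp values to the valid intensity range).
    return np.clip(texture, 0.0, 1.0)

def inpaint(img, mask):
    # Progressive reconstruction: each stage consumes the previous
    # stage's output, mirroring the painting process (sketch ->
    # texture -> color) described in the abstract.
    s = structure_stage(img, mask)
    t = texture_stage(s, mask)
    return color_stage(t, mask)
```

In the actual network, each stage would be a generator branch, and the bi-directional cross-stage perception module would exchange spatial features between these stages rather than passing only the intermediate image.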
