Abstract

Patch-based approaches are used in state-of-the-art methods for image inpainting. This paper presents a new method for exemplar-based image inpainting using transformed patches. The transformation is determined for each patch in a fully automatic way from the surrounding texture content. We build upon a recent affine invariant patch similarity measure that performs an appropriate patch comparison by automatically adapting the size and shape of the patches. As a consequence, it intrinsically extends the set of available source patches to copy information from. We incorporate this measure into a variational formulation for inpainting and present a numerical algorithm for optimizing it. We show that our method can be applied to complete a perspectively distorted texture, as well as to automatically inpaint one view of a scene using another view of the same scene as a source. We present experimental results for both grayscale and color images, and a comparison with several exemplar-based image inpainting methods.
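To illustrate the core idea of comparing affinely transformed patches, the following is a minimal, self-contained sketch (not the paper's actual algorithm or similarity measure): a target patch is compared against candidate source patches sampled under a set of 2×2 affine deformations, using a plain sum-of-squared-differences cost. All function names, the nearest-neighbour sampling, and the exhaustive search over a fixed transform set are illustrative assumptions; the paper instead adapts patch size and shape automatically via an affine invariant similarity measure.

```python
import numpy as np

def warp_patch(image, center, A, size):
    """Sample a size-by-size patch around `center` (row, col), with the
    sampling grid deformed by the 2x2 affine matrix A (nearest neighbour).
    Illustrative sketch only, not the paper's sampling scheme."""
    h, w = image.shape
    r = size // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    coords = A @ np.stack([xs.ravel(), ys.ravel()])  # deform the grid
    px = np.clip(np.round(center[1] + coords[0]).astype(int), 0, w - 1)
    py = np.clip(np.round(center[0] + coords[1]).astype(int), 0, h - 1)
    return image[py, px].reshape(size, size)

def best_source_patch(image, target_center, candidates, transforms, size=7):
    """Compare the (undeformed) target patch against affinely warped
    candidate source patches; return the best (center, transform) pair
    and its SSD cost. Allowing transforms enlarges the source set."""
    target = warp_patch(image, target_center, np.eye(2), size)
    best, best_cost = None, np.inf
    for c in candidates:
        for A in transforms:
            src = warp_patch(image, c, A, size)
            cost = np.sum((src - target) ** 2)  # SSD patch distance
            if cost < best_cost:
                best, best_cost = (c, A), cost
    return best, best_cost
```

Even this naive version shows why transformed patches help: a texture seen under perspective distortion may match no untransformed source patch, yet match well once candidate patches are affinely deformed before comparison.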
