Abstract

The adoption of generative models for image inpainting has made it a widely accepted image-editing technique. However, generative inpainting methods remain cumbersome because of the large memory footprint and computational resources they consume; as a result, images above 2K resolution are rarely considered as input for inpainting. In an era of super-resolution, where viewers are accustomed to imagery beyond 6K and 8K, inpainting algorithms have not kept pace with this resolution benchmark. This paper proposes a high-resolution inpainting algorithm that addresses these concerns. To reduce computational overhead, the proposed architecture follows a hierarchically scaled two-stage generative inpainting network. The framework introduces a novel approach of injecting a blind super-resolution generative adversarial network into an inpainting pipeline: the resolution lost in the first stage is regained by an embedded pre-trained Real-ESRGAN model in the final stage. Qualitative and quantitative evaluations of the proposed method were performed on multiple datasets. Comparisons with state-of-the-art inpainting methods yielded promising results, marking a significant step toward super-resolution inpainting.
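The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal structural illustration only: the stage-1 inpainting network and the Real-ESRGAN super-resolution stage are replaced by trivial placeholder functions (mean-fill and nearest-neighbour upsampling), and all function names are hypothetical, not from the paper.

```python
import numpy as np

def downscale(img, factor):
    # Nearest-neighbour downscaling stands in for the hierarchical rescale
    # that makes high-resolution input tractable for the inpainting stage.
    return img[::factor, ::factor]

def coarse_inpaint(img, mask):
    # Placeholder for the stage-1 generative inpainting network:
    # fill masked pixels with the mean of the known pixels.
    filled = img.copy()
    filled[mask] = img[~mask].mean()
    return filled

def upscale(img, factor):
    # Placeholder for the pre-trained Real-ESRGAN stage: in the paper this
    # model regains the resolution lost during stage 1.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def hires_inpaint(img, mask, factor=4):
    # Stage 1: inpaint at reduced resolution; Stage 2: restore resolution.
    lo = coarse_inpaint(downscale(img, factor), downscale(mask, factor))
    return upscale(lo, factor)

# Toy example: a 64x64 image with a square hole to fill.
img = np.random.rand(64, 64).astype(np.float32)
mask = np.zeros((64, 64), dtype=bool)
mask[16:32, 16:32] = True
out = hires_inpaint(img, mask)
print(out.shape)  # (64, 64)
```

The point of the hierarchy is that the expensive generative stage runs at a fraction of the target resolution, while the super-resolution stage (Real-ESRGAN in the paper) recovers the output size.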
