Abstract

Generative Adversarial Network (GAN) based image inpainting algorithms often make errors when filling arbitrarily masked regions because conventional convolutions treat all input pixels as valid during the convolution operation. To address this problem, we present an image inpainting algorithm that replaces the standard convolutions in the network's residual blocks with gated convolutions, allowing the network to learn and capture the relationship between the known regions and the masked regions. The algorithm uses a two-stage generative adversarial restoration network in which structure and texture are restored sequentially. First, the structural information of the known region of the damaged image is extracted with an edge detection algorithm. The edges of the masked region are then combined with the color and texture information of the known region for structure restoration. Finally, the completed structure and the damaged image are fed into the texture restoration network, which outputs the complete image. During training, a spectral-normalized Markovian discriminator is employed to mitigate slow weight updates across iterations, improving convergence speed and model accuracy. Experiments on the Places2 dataset show that our algorithm surpasses existing two-stage restoration algorithms in peak signal-to-noise ratio and structural similarity: it achieves a 4.3% gain in peak signal-to-noise ratio and a 3.7% gain in structural similarity when restoring images with damaged regions of various shapes and sizes, and it also produces noticeable visual improvements, further validating its effectiveness.
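The core mechanism the abstract describes, a gated convolution that learns a soft validity mask instead of treating every input pixel as effective, can be sketched roughly as follows. This is a minimal illustration in the style of standard gated-convolution formulations, not the authors' exact implementation; the class and parameter names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Illustrative gated convolution: two parallel convolutions produce
    features and a gate; the sigmoid gate softly weights each output
    location, so masked (invalid) pixels need not be treated as valid."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, padding: int = 1):
        super().__init__()
        # Feature branch: ordinary convolution producing candidate features.
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        # Gate branch: learns a per-pixel, per-channel soft validity mask.
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = activation(features) * sigmoid(gate); the gate lets the
        # network suppress contributions from the masked region.
        return torch.tanh(self.feature(x)) * torch.sigmoid(self.gate(x))

# Example: drop-in replacement for nn.Conv2d inside a residual block.
layer = GatedConv2d(in_ch=4, out_ch=8)           # e.g. RGB + mask channel
out = layer(torch.zeros(1, 4, 16, 16))           # same spatial size as input
```

In the two-stage network described above, such layers would replace the traditional convolutions inside the residual blocks of both the structure and texture restoration generators.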