Abstract

Image segmentation reveals the semantic structure of an image, which provides useful guidance for image inpainting. Notably, it can help mitigate artifacts at the boundaries between different semantic regions during the inpainting process. Existing semantic guidance-based image inpainting methods provide only one-way guidance, from the semantic segmentation task to the image inpainting task. There is no feedback from the inpainting results to adjust the guidance process, which limits performance. To tackle this issue, this work proposes mutual dual-task generators that establish an interaction between the image segmentation and image inpainting tasks. Semantic segmentation thus guides image inpainting and also receives feedback from it; the two processes interact and progressively improve the inpainting quality. The mutual dual-task generator consists of a shared encoder and mutual decoders equipped with a bidirectional Cross-domain Feature DeNormalization (CFDN) module, which hierarchically models Segmentation-guided image Texture (ST) generation and Texture-guided semantic Segmentation (TS) generation. At the end of the mutual decoders, an Adaptive Attention Fusion (AAF) module is proposed to strengthen the texture and semantic class affinity between pixels, further refining the inpainted results. Experimental results demonstrate that the proposed mutual dual-task generator pipeline outperforms state-of-the-art methods on three public datasets.
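
To illustrate the overall wiring the abstract describes (a shared encoder feeding two mutually guided decoder streams linked by a bidirectional CFDN exchange), below is a minimal PyTorch sketch. Every identifier, the layer sizes, and the SPADE-style modulation used inside the CFDN block are assumptions made for illustration; the paper's actual CFDN and AAF designs are not specified in the abstract and are not reproduced here.

```python
import torch
import torch.nn as nn


class CFDN(nn.Module):
    """Hypothetical cross-domain feature denormalization block: the
    normalized features of one task are modulated by scale/shift maps
    predicted from the other task's features (a SPADE-style assumption)."""

    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.gamma = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # Denormalize x using parameters inferred from the guiding task.
        return self.norm(x) * (1 + self.gamma(guide)) + self.beta(guide)


class MutualDualTaskGenerator(nn.Module):
    """Shared encoder plus two mutually guided decoder streams: texture
    (inpainting) and semantic segmentation. Depths and channel counts
    are placeholders, not the paper's configuration."""

    def __init__(self, in_ch: int = 4, feat: int = 64, n_classes: int = 21):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Per-task projections of the shared features.
        self.to_tex = nn.Conv2d(feat, feat, kernel_size=1)
        self.to_seg = nn.Conv2d(feat, feat, kernel_size=1)
        # Bidirectional exchange: seg -> texture (ST) and texture -> seg (TS).
        self.cfdn_st = CFDN(feat)
        self.cfdn_ts = CFDN(feat)
        self.dec_tex = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
        self.dec_seg = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, masked_img: torch.Tensor, mask: torch.Tensor):
        shared = self.encoder(torch.cat([masked_img, mask], dim=1))
        h_tex, h_seg = self.to_tex(shared), self.to_seg(shared)
        h_tex = self.cfdn_st(h_tex, h_seg)  # segmentation features guide texture
        h_seg = self.cfdn_ts(h_seg, h_tex)  # texture features refine segmentation
        return self.dec_tex(h_tex), self.dec_seg(h_seg)


if __name__ == "__main__":
    net = MutualDualTaskGenerator()
    img = torch.randn(1, 3, 256, 256)   # masked RGB image
    mask = torch.ones(1, 1, 256, 256)   # binary hole mask
    out_img, out_seg = net(img, mask)
    print(out_img.shape, out_seg.shape)  # (1, 3, 256, 256), (1, 21, 256, 256)
```

The key design point the sketch tries to capture is that neither stream is purely upstream of the other: each decoder's features are denormalized using the other task's features, so segmentation guidance and inpainting feedback flow in both directions.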
