Abstract

Existing image completion methods mostly assume that the missing regions are small or located in the middle of the image. When the regions to be completed are large or near the edge of the image, the lack of context information causes the completion results to be blurred or distorted, leaving a large blank area in the final result. In addition, the unstable training of the generative adversarial network tends to produce pseudo-color in the completion results. Aiming at these two problems, a method of image completion for large or edge-missing areas is proposed, and the network structure is improved. On the one hand, the method overcomes the lack of context information, thereby ensuring the realism of the generated texture details; on the other hand, it suppresses the generation of pseudo-color, guaranteeing the consistency of the whole image in both vision and content. The experimental results show that the proposed method achieves better results when completing large or edge-missing areas.

Highlights

  • Image completion technology is designed to synthesize the missing or damaged areas of an image and is a fundamental problem in low-level vision

  • To solve the problems of completing large missing areas or regions located at the border of the image, and to overcome the problem of unstable training of the adversarial network, this paper proposes an image completion method for large or edge-missing areas and makes improvements to the network structure used in the method

  • (1) On the one hand, by using the central block of the complemented region as the input of the additional local discriminator (local discriminator 2), the synthesis results become more realistic, because training back-propagates the loss computed between the central region and the corresponding region of the real image; (2) on the other hand, the training of the network structure used in Iizuka's method is unstable and difficult to converge, which the improved structure addresses
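The central-block cropping that feeds local discriminator 2 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and sizes are hypothetical, and it simply extracts the center block that would be passed to the extra local discriminator.

```python
import numpy as np

def crop_center(image: np.ndarray, size: int) -> np.ndarray:
    """Crop a size x size block from the center of an H x W (x C) image.

    In the scheme described above, this central block of the completed
    region is fed to the additional local discriminator, so the
    adversarial loss back-propagates through the pixels that were
    actually synthesized rather than only through the full image.
    """
    h, w = image.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]

# Example: a 128x128 RGB "completed" image; the 64x64 center block
# goes to the local discriminator, the full image to the global one.
completed = np.zeros((128, 128, 3), dtype=np.float32)
center = crop_center(completed, 64)
print(center.shape)  # (64, 64, 3)
```

During training, the same crop would be taken from both the generated image and the ground-truth image so that the discriminator loss compares corresponding regions.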


Summary

Introduction

Image completion technology is designed to synthesize the missing or damaged areas of an image and is a fundamental problem in low-level vision. Early image completion methods copied existing image blocks from the uncorrupted area into the missing area. Such methods achieve effective results only when the image to be complemented has a strong structure, the texture information (such as the color of each region) has strong similarity, and the missing region has a regular shape [12,13,14,15,16]. Cropping the central block of the preliminarily complemented area and inputting it into the local discriminator for adversarial training overcomes the problems of the existing method, such as ambiguity and distortion when the missing areas are large. Replacing the Rectified Linear Unit layer (ReLU layer) with the combination of a batch normalization layer (BN layer) and a Leaky_ReLU layer makes the completion results more realistic and the edges better fused.
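The BN + Leaky_ReLU combination described above can be sketched as a simple forward pass. The following is a minimal numpy illustration under stated assumptions (per-channel batch statistics only, no learned scale/shift or running averages, NHWC layout); it is not the paper's actual layer implementation.

```python
import numpy as np

def bn_leaky_relu(x: np.ndarray, alpha: float = 0.2, eps: float = 1e-5) -> np.ndarray:
    """Batch-normalize per channel, then apply Leaky_ReLU.

    x has shape (N, H, W, C). Unlike plain ReLU, which zeroes all
    negative activations, Leaky_ReLU keeps a small slope (alpha) for
    negative inputs, so gradients still flow there; BN keeps the
    activations well-scaled. Together these help stabilize the
    adversarial training that plain ReLU networks struggle with.
    """
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)      # batch normalization
    return np.where(x_hat >= 0, x_hat, alpha * x_hat)  # Leaky_ReLU
```

In a real network these layers would follow each convolution in the discriminator, with learnable BN parameters and running statistics for inference.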

Related Work
Method
Figure 16. The input area of the completion network
Results
Figure: results comparison with Iizuka's method; the red line and the green line denote the two methods' results.
Conclusions
References
