Abstract

Deep convolutional neural networks (DCNNs) have demonstrated their potential to generate reasonable results in image inpainting. Some existing methods use convolutions to encode features from the surrounding region, pass them through fully connected layers, and finally predict the missing region. Although the final results are semantically reasonable, they often contain blurred areas because standard convolution is used, which conditions on both the valid pixels and the substitute values filled into the masked holes. In this paper, we introduce dense blocks into the U-Net architecture, which alleviates the vanishing-gradient problem while also reducing the number of parameters. Most importantly, dense blocks enhance feature propagation and enable more efficient feature reuse. Partial convolution is used to suppress artifacts such as color discrepancies and blurring. Experiments on the Places365 dataset demonstrate that our approach generates more detailed and semantically reasonable results when inpainting randomly located regions.
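To make the two building blocks concrete, the sketch below gives a minimal PyTorch rendering of a DenseNet-style block and of a partial convolution layer in the spirit of Liu et al.'s formulation (output conditioned only on valid pixels, with per-window renormalization and a shrinking mask). This is an illustration under our own assumptions, not the paper's released code: the class names, growth rate, layer count, and normalization details are hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBlock(nn.Module):
    """Dense block: every layer sees the concatenation of all preceding
    feature maps, encouraging feature reuse and easing gradient flow."""

    def __init__(self, in_ch, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            ch += growth_rate  # each layer contributes growth_rate new channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


class PartialConv2d(nn.Module):
    """Partial convolution: convolve only valid (unmasked) pixels,
    renormalize by the valid fraction of each window, and update the mask."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel that counts valid pixels under each window.
        self.register_buffer("mask_kernel", torch.ones(1, 1, kernel_size, kernel_size))
        self.window_size = kernel_size * kernel_size
        self.stride = stride
        self.padding = padding

    def forward(self, x, mask):
        # mask: (N, 1, H, W); 1 marks valid pixels, 0 marks the hole.
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.mask_kernel,
                                   stride=self.stride, padding=self.padding)
        # Convolve with hole pixels zeroed out.
        out = self.conv(x * mask)
        # Rescale by sum(1)/sum(M), keeping the bias unscaled.
        bias = self.conv.bias.view(1, -1, 1, 1)
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = (out - bias) * scale + bias
        # Zero outputs whose window contained no valid pixel, then update mask:
        # a location becomes valid once any input pixel under it was valid.
        hole = valid_count == 0
        return out.masked_fill(hole, 0.0), (~hole).float()


# Toy usage: a square hole in a random image.
x = torch.randn(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 0
out, new_mask = PartialConv2d(3, 32, kernel_size=3, padding=1)(x, mask)
```

In the paper's setting, layers like these would be stacked inside the encoder and decoder of a U-Net, with the mask passed along and progressively filled as depth increases.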
