Abstract

Image reconstruction has received much attention and has advanced in recent years with the rise of deep learning. Deep neural models can perform image-to-image translation by transferring pictorial styles, colorizing old photographs, or filling in missing parts. This last technique, known as image inpainting, enables restoration of damaged or missing parts of an image or photograph to obtain the complete picture. However, it is not always possible to properly define which parts are missing or to identify where they are missing, as in the case of superimposing new information on an already complete image. In this paper, we propose the use of generative adversarial networks (GANs), a well-known deep learning model, for virtual inpainting restoration of artificial landscape images containing archaeological remains of Greek temples. The network identifies key features determined by the internal logic of the architectural style denoted by the ruins and adds the missing architectural elements to obtain an image of the restored building. Unlike in other studies, the network receives no information on which elements should be added or where. Virtual inpainting restoration is capable of representing a building’s envelope, and it also integrates particular aspects of the building related to the architectural language used for its design. The restoration of the fundamental parts of the classical Greek order was consistent, and the results were evaluated with objective metrics and through a subjective survey among academics and architects. Both evaluations showed that adding segmented images to the training dataset gives better results.
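The adversarial setup described above can be summarized by the standard conditional GAN objective. This is a sketch assuming a pix2pix-style conditional formulation, where x denotes an input image of the ruins and y the corresponding restored target; the paper's exact loss and GAN variant are not stated in the abstract and may differ:

```latex
% Assumed conditional GAN objective (pix2pix-style sketch, not the paper's stated loss):
% G maps a ruin image x to a restored image G(x); D judges (input, output) pairs.
\min_G \max_D \;
  \mathbb{E}_{x,y}\!\left[\log D(x, y)\right]
  + \mathbb{E}_{x}\!\left[\log\bigl(1 - D(x, G(x))\bigr)\right]
```

Image-to-image translation models of this kind commonly add a reconstruction term (e.g. an L1 loss between G(x) and y) to encourage structural fidelity; whether this paper uses such a term cannot be inferred from the abstract alone.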
