Abstract
An effective method to enable the enjoyment of works of art by the blind is to reproduce tactile copies of the work, facilitating tactile exploration. This is even more important for paintings, which are inherently inaccessible to the blind unless they are transformed into 3D models. Today, artificial intelligence techniques are growing rapidly and represent a paramount method for solving a variety of previously hard-to-solve tasks. It is therefore reasonable to expect that the translation from 2D images to 3D models using such methods will also continue to develop. Unfortunately, reconstructing a 3D model from a single image, especially a painting-based image, is an ill-posed problem due to depth ambiguity and the lack of a ground truth for the 3D model. To address this issue, this paper provides an overview of artificial intelligence-based methods for reconstructing 3D geometry from a single image. The survey explores the potential of Convolutional Neural Networks (CNNs), Generative Adversarial Networks, Variational Autoencoders, and zero-shot methods. Through a small set of case studies, the capabilities and limitations of CNNs in creating a 3D scene model from artworks are also examined. The findings suggest that, while deep learning models prove effective for retrieving 3D geometry from paintings, post-processing and user interaction are still needed to improve the accuracy of the resulting 3D models.
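As a hedged illustration of the kind of CNN-based pipeline discussed in the survey, the sketch below estimates a depth map from a single painting image using the publicly available MiDaS monocular-depth model and converts it into a simple height field that could serve as a starting point for a tactile relief. The model choice, file name, and relief scale are illustrative assumptions, not the specific method evaluated in the case studies.

```python
# Hedged sketch: single-image depth estimation with a pretrained CNN (MiDaS),
# followed by a naive height-field conversion. Model choice, file names, and
# the relief scale are illustrative assumptions only.
import cv2
import numpy as np
import torch

# Load a small pretrained monocular-depth network and its input transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

# "painting.jpg" is a placeholder for any digitized artwork image.
img = cv2.cvtColor(cv2.imread("painting.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))
    # Resize the raw prediction back to the painting's resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

# Normalize the relative, scale-ambiguous depth and turn it into a height
# field: one 3D point per pixel, with the z axis exaggerated for touch.
z = (depth - depth.min()) / (depth.max() - depth.min())
h, w = z.shape
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
relief_mm = 5.0  # assumed maximum relief height in millimetres
points = np.stack([xs, ys, z * relief_mm], axis=-1).reshape(-1, 3)
print(points.shape)  # (h*w, 3) vertices of a candidate tactile relief
```

As the abstract notes, a result obtained this way typically still requires post-processing (for example, smoothing, segmentation into depth layers, or manual correction) before it is suitable for tactile exploration.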