Abstract
In this paper, our goal is to perform a virtual restoration of an ancient coin from its image. The present work is the first to propose this problem, and it is motivated by two key promising applications. The first emerges from the recently recognised dependence of automatic image-based coin type matching on the condition of the imaged coins; the algorithm introduced herein could serve as a pre-processing step aimed at overcoming this weakness. The second concerns the utility, to both professional and hobby numismatists, of being able to visualise and study an ancient coin in a state closer to its original (minted) appearance. To address the problem at hand, we introduce a framework which comprises two parts: a deep learning based method using Generative Adversarial Networks, capable of learning the range of appearance variation of the different semantic elements artistically depicted on coins, and a complementary algorithm used to collect, correctly label, and prepare for processing the large number of images (here 100,000) of ancient coins needed to train the former. Empirical evaluation performed on a withheld subset of the data demonstrates highly promising performance of the proposed methodology and shows that our algorithm correctly learns the spectra of appearance variation across different semantic elements and, despite the enormous variability present, reconstructs the missing (damaged) detail while matching the surrounding semantic content and artistic style.
Highlights
The aim of the work described in the present paper is to generate a realistic-looking synthetic image of an ancient coin prior to its suffering damage through wear and tear, from an image of an actual coin in a damaged state
This is a novel challenge in the realm of computer vision and machine learning based analysis of ancient coins, introduced here for the first time
Challenges emerging from the numismatics community first attracted the attention of computer vision and machine learning specialists some decade and a half ago, and since then this interest has grown at an increasing pace, spawning an entirely new sub-field of research
Summary
The aim of the work described in the present paper is to generate a realistic-looking synthetic image of an ancient coin prior to its suffering damage through wear and tear, from an image of an actual coin in a damaged state. This is a novel challenge in the realm of computer vision and machine learning based analysis of ancient coins, introduced here for the first time. It is motivated by the value it adds in two key application domains. The first of these concerns hobby numismatists, who would benefit from seeing what their coins looked like originally, as well as from being more readily able to identify them, a formidable task for non-experts. The second is that being able to create synthetic images of less damaged coins could be of great benefit as a pre-processing step for automatic methods of coin analysis, such as identification [1] or semantic description [2], as the performance of such methods has been shown to be significantly affected by the state of preservation of the imaged coins [3].
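To give a concrete (if heavily simplified) sense of the adversarial training principle underlying the proposed framework, the following is a minimal sketch of a GAN in one dimension: a generator learns to match a target distribution while a logistic discriminator tries to tell real samples from generated ones. This toy example is not the paper's image-to-image architecture; all parameter names and the target distribution (N(3, 1)) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = a*z + c.
w, b = 0.1, 0.0
a, c = 1.0, 0.0
lr = 0.05

for step in range(2000):
    x_real = rng.normal(3.0, 1.0)   # sample from the "real" target distribution
    z = rng.normal(0.0, 1.0)        # latent noise
    x_fake = a * z + c              # generator output G(z)

    # Discriminator: gradient ascent on log D(x_real) + log(1 - D(G(z))).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    grad_w = (1 - d_real) * x_real - d_fake * x_fake
    grad_b = (1 - d_real) - d_fake
    w += lr * grad_w
    b += lr * grad_b

    # Generator: gradient ascent on the non-saturating objective log D(G(z)).
    d_fake = sigmoid(w * (a * z + c) + b)
    grad_x = (1 - d_fake) * w       # d log D / d x_fake, chained into a and c
    a += lr * grad_x * z
    c += lr * grad_x

# After training, the generated samples' mean should drift toward the
# target mean of 3, i.e. the generator has learned the appearance of the data.
fakes = a * rng.normal(0.0, 1.0, 10000) + c
print(float(fakes.mean()))
```

In the paper's setting, the same adversarial game is played over images rather than scalars, so that the generator learns the spectrum of plausible appearances of each semantic element and can fill in damaged regions consistently with their surroundings.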