Abstract

We propose to perform image-based ancient coin classification by recognizing the symbols minted on the reverse side of coins. A dense-sampling-based bag-of-visual-words model is used for symbol recognition. The lack of spatial information in the bag-of-visual-words model degrades the symbol recognition rate, since the symbols have specific geometric structures. Furthermore, coins can be imaged under various rotations, resulting in severely rotated symbols. We therefore propose a novel bag-of-visual-words model for symbol-based coin classification that accounts for the spatial arrangement of the visual words in a rotation-invariant manner. We perform our experiments on images collected from three different sources, which makes our dataset more challenging. To evaluate the robustness of the proposed model to rotations, we synthetically generated severely rotated coin images. In the presence of rotation differences between coins, our model outperforms both the conventional bag-of-visual-words model and the recently proposed model based on angle histograms of pairwise identical visual words.
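
For context, the sketch below illustrates the conventional dense-sampling bag-of-visual-words baseline that the abstract refers to, not the rotation-invariant spatial model proposed in the paper. The patch descriptor, grid step, and codebook size are placeholder choices introduced here for illustration and are not the parameters used by the authors.

```python
# Minimal sketch of a dense-sampling bag-of-visual-words pipeline (baseline only).
# Raw pixel patches stand in for local descriptors; any descriptor could be used.
import numpy as np
from sklearn.cluster import KMeans

def dense_patches(image, patch=16, step=8):
    """Sample raw pixel patches on a regular grid (dense sampling)."""
    h, w = image.shape
    feats = []
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            feats.append(image[y:y + patch, x:x + patch].ravel())
    return np.array(feats, dtype=np.float32)

def build_codebook(descriptor_sets, k=50, seed=0):
    """Cluster all training descriptors into k visual words."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_desc)

def bovw_histogram(descriptors, codebook):
    """Quantize descriptors and build a normalized word-frequency histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float32)
    return hist / max(hist.sum(), 1.0)

# Usage on synthetic images standing in for coin reverse images:
rng = np.random.default_rng(0)
train_images = [rng.random((128, 128)).astype(np.float32) for _ in range(4)]
train_desc = [dense_patches(img) for img in train_images]
codebook = build_codebook(train_desc)
hists = [bovw_histogram(d, codebook) for d in train_desc]
print(hists[0].shape)  # one fixed-length histogram per coin image
```

Because such a histogram discards where each visual word occurs, it cannot capture the geometric structure of a symbol, which is the limitation the proposed rotation-invariant spatial model addresses.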
