Abstract

We propose a generalized 3D shape descriptor for the efficient classification of 3D archaeological artifacts. Our descriptor is based on a multi-view analysis of curvature features and is built in the following steps: pose normalization of the 3D models, computation of a local curvature descriptor, construction of the 3D shape descriptor from multi-view curvature maps, and dimensionality reduction by random projections. We generate two descriptors from two different paradigms: 1) handcrafted, in which the descriptor is manually designed for object feature extraction and passed directly to the classifier, and 2) machine learnt, in which object features are learned automatically via transfer learning from a pretrained deep neural network (VGG-16) and then passed to the classifier. These descriptors are applied to two archaeological datasets: 1) a non-public Mexican dataset, a collection of 963 3D archaeological objects from the Templo Mayor Museum in Mexico City that includes anthropomorphic sculptures, figurines, masks, ceramic vessels, and musical instruments; and 2) the 3D pottery content-based retrieval benchmark dataset, consisting of 411 objects. Once the multi-view descriptors are obtained, we evaluate their effectiveness using the following classification schemes: $K$-nearest neighbor, support vector machine, and structured support vector machine. The classification results of our descriptors are compared against five popular 3D descriptors from the literature: rotation invariant spherical harmonic, histogram of spherical orientations, signature of histograms of orientations, symmetry descriptor, and reflective symmetry descriptor. Experimentally, we verified that our machine learnt and handcrafted descriptors offer the best classification accuracy (20% better on average than the comparative descriptors), independently of the classification method. Our proposed descriptors capture sufficient information to discern among the different classes, from which we conclude that they adequately characterize the datasets.
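The last two stages of the pipeline summarized above — dimensionality reduction of the multi-view descriptor by random projections, followed by $K$-nearest-neighbor classification — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian projection matrix, the descriptor dimensions, and the toy two-class data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(X, k):
    """Project d-dimensional descriptors down to k dimensions with a
    Gaussian random matrix (Johnson-Lindenstrauss style); the 1/sqrt(k)
    scaling roughly preserves pairwise distances."""
    d = X.shape[1]
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ R

def knn_classify(X_train, y_train, x, k=3):
    """Label x by majority vote among its k nearest training descriptors."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy demo: two well-separated classes of 512-dim "descriptors"
# standing in for the multi-view curvature descriptors.
X0 = rng.normal(0.0, 1.0, (20, 512))
X1 = rng.normal(5.0, 1.0, (20, 512))
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)

Xp = random_projection(X, 32)              # 512 -> 32 dimensions
pred = knn_classify(Xp[1:], y[1:], Xp[0])  # leave-one-out query
```

Because the classes are far apart relative to their spread, the class of the held-out sample is still recovered after the 16x reduction, which is the practical appeal of random projections as a cheap, data-independent reduction step.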
