Abstract

Automatically selecting a set of representative views of a 3D virtual cultural relic is crucial for constructing wisdom museums. There is no consensus in computer graphics on what constitutes a good single view, and the same is true of a good set of multiple views. View-based methods play an important role in 3D shape retrieval and classification. However, it remains difficult to select views that both conform to subjective human preferences and provide a good feature description. In this study, we define two novel measures based on information entropy, named depth variation entropy and depth distribution entropy. These measures quantify, for each view, the amount of information carried by local depth variation and by the distribution of distinct depth values. First, the 3D cultural relic was transformed into a canonical pose using principal component analysis. A set of depth maps was then captured by orthographic cameras placed at the dense vertices of a geodesic unit sphere obtained by subdividing a regular unit octahedron. Afterwards, the two measures were calculated separately on the depth maps captured from these vertices, and the results on each one-eighth sphere formed a group. Within each group, the views with maximum depth variation entropy and depth distribution entropy were selected, and additional scattered viewpoints were then chosen. Finally, the threshold word histogram derived from vector quantization of salient local descriptors on the selected depth maps was used to represent the 3D cultural relic. The viewpoints obtained by the proposed method are consistent for an arbitrary initial pose of the 3D model, which eliminates the need to manually adjust the model’s pose and provides acceptable display views for viewers. In addition, experiments on several datasets verified that the proposed method, combined with a Bag-of-Words mechanism and a deep convolutional neural network, also achieves good retrieval and classification performance when using only four views.
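
The sketch below is a minimal, hedged illustration of two steps described above, not the authors' exact implementation: candidate viewpoints are generated by repeatedly subdividing a regular unit octahedron into a geodesic sphere, and each rendered depth map is scored with two histogram-based Shannon entropies standing in for the paper's depth variation entropy (here taken as the entropy of local depth differences) and depth distribution entropy (here taken as the entropy of the depth values themselves). All function names, bin counts and the background-handling convention are illustrative assumptions.

```python
import numpy as np


def shannon_entropy(values, bins=64):
    """Shannon entropy (bits) of a 1-D sample, estimated via histogram binning."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0].astype(np.float64)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())


def depth_distribution_entropy(depth_map, background=0.0):
    """Entropy of the depth values over foreground pixels of one rendered view."""
    fg = depth_map[depth_map != background]
    return shannon_entropy(fg) if fg.size else 0.0


def depth_variation_entropy(depth_map, background=0.0):
    """Entropy of local depth changes (finite-difference gradient magnitudes)."""
    gy, gx = np.gradient(depth_map.astype(np.float64))
    mag = np.hypot(gx, gy)[depth_map != background]
    return shannon_entropy(mag) if mag.size else 0.0


def octahedron_viewpoints(levels=3):
    """Vertices of a geodesic unit sphere built by subdividing a regular unit
    octahedron `levels` times; each vertex is a candidate camera position."""
    verts = [np.array(v, dtype=np.float64) for v in
             [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
    faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
             (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
    for _ in range(levels):
        cache, new_faces = {}, []

        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in cache:
                m = verts[i] + verts[j]
                verts.append(m / np.linalg.norm(m))   # project onto the unit sphere
                cache[key] = len(verts) - 1
            return cache[key]

        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts)
```

In this sketch one would render an orthographic depth map from each returned viewpoint (rendering code omitted) and, per one-eighth sphere, keep the views maximising each entropy, mirroring the grouping described in the abstract. A similarly hedged sketch of the threshold word histogram step follows, assuming local descriptors have already been extracted from the selected depth maps and a visual vocabulary has been learned with k-means; clipping each bin at a threshold before normalisation is an assumed reading of "threshold word histogram", not the paper's exact definition.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq


def threshold_word_histogram(descriptors, codebook, t=5):
    """Quantise local descriptors against a visual vocabulary and build a
    word histogram whose bins are clipped at `t` before normalisation."""
    words, _ = vq(np.asarray(descriptors, dtype=np.float64), codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    hist = np.minimum(hist, t)
    return hist / (hist.sum() + 1e-12)


# Hypothetical usage: learn a 256-word codebook from training descriptors, then
# describe a model by concatenating the histograms of its selected views.
# codebook, _ = kmeans2(training_descriptors, k=256, minit="++")
# signature = np.concatenate([threshold_word_histogram(d, codebook)
#                             for d in per_view_descriptors])
```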

Highlights

  • High-precision 3D cultural relics allow researchers and viewers to observe surface morphology and local features from an arbitrary angle

  • The 3D shape retrieval results of our method, which uses grouping word histograms, were analyzed on the McGill Shape Benchmark (MSB), which consists of 255 objects classified into 10 categories [10], and on the test set of the Princeton Shape Benchmark (PSB), which contains 907 models classified into 92 categories [11]

  • Nearest neighbor (1-NN), first-tier (1-Tier), second-tier (2-Tier) and discounted cumulative gain (DCG) were used to compare our method with CM-BOF [28], the light field descriptor (LFD) [23], the radialized spherical extent function (REXT) [43], the spherical harmonic descriptor (SHD) [43], the Gaussian Euclidean distance transform (GEDT) [44], viewpoint information I2 [21] and the D2 shape distribution (D2) [45] on the PSB test set with the base classification
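
The four retrieval statistics named above are the standard Princeton Shape Benchmark measures. Below is a minimal sketch of how they are commonly computed; the inputs `distances` (an N x N dissimilarity matrix between models) and `labels` (their integer class labels) are hypothetical, and the DCG discounting (gain 1 at rank 1, 1/log2(rank) afterwards) is one common convention rather than a quotation of the paper.

```python
import numpy as np


def retrieval_statistics(distances, labels):
    """1-NN, first/second tier and normalised DCG, averaged over all queries;
    each query's own entry is removed from its ranked list."""
    distances = np.asarray(distances, dtype=np.float64)
    labels = np.asarray(labels)
    n = len(labels)
    nn = ft = st = dcg = 0.0
    for q in range(n):
        order = np.argsort(distances[q])
        order = order[order != q]                 # drop the query itself
        rel = (labels[order] == labels[q])        # binary relevance per rank
        c = int(rel.sum())                        # class size minus the query
        if c == 0:
            continue
        nn += float(rel[0])                       # nearest neighbor
        ft += rel[:c].sum() / c                   # first tier
        st += rel[:2 * c].sum() / c               # second tier
        disc = np.ones(len(rel))
        disc[1:] = 1.0 / np.log2(np.arange(2, len(rel) + 1))
        ideal = 1.0 + (1.0 / np.log2(np.arange(2, c + 1))).sum()
        dcg += (rel * disc).sum() / ideal         # normalised DCG
    return {"1-NN": nn / n, "1-Tier": ft / n, "2-Tier": st / n, "DCG": dcg / n}
```

Here the first tier is the recall within the top |C|−1 results for a query of class size |C|, and the second tier is the recall within the top 2(|C|−1).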


Introduction

High-precision 3D cultural relics allow researchers and viewers to observe surface morphology and local features from an arbitrary angle. Identifying a cultural relic relies on marking more detailed information from different views, which can be used to expand the cultural relic’s knowledge base [3]. This means that we must be able to automatically obtain several views that largely cover the surface of the high-precision 3D model of the cultural relic and include its significant features. It remains difficult to obtain a few views with large shape differences, such as views containing the front as well as the side of the object. To solve these problems, two new measures based on information entropy have been defined. We present the best views selected by four different algorithms and show the results obtained by the proposed multi-view selection method.

Related Work
Selection of the Best View of 3D Objects
View-Based 3D Model Retrieval and Classification
Multi-View Selection
Threshold Word Histogram Method for Representative Analysis
Experiment Results and Analysis
Evaluation of the Threshold Word Histogram Method
Applicability of a Small Number of Views
Classification Using a Small Number of Views Based on Deep Learning
Conclusions