Abstract

In recent years, extensive effort has been devoted to developing better-performing 3-D object retrieval methods. View-based methods have attracted significant attention, not only because of their state-of-the-art performance but also because they require only a set of a 3-D object's 2-D view images. However, most recent approaches work only with features extracted directly from the images and ignore the latent relationships among them. To exploit these latent characteristics, this paper introduces a visual-topic-model approach to 3-D object retrieval. In this framework, dense scale-invariant feature transform (dense-SIFT) descriptors are extracted from a set of views of each 3-D object, and all dense-SIFT descriptors are grouped into bag-of-words features using k-means clustering. The topic distribution of a 3-D object is then generated via latent Dirichlet allocation (LDA) from its bag-of-words features, with Gibbs sampling applied for LDA learning and inference. We conduct experiments on the Princeton Shape Benchmark (PSB) and the National Taiwan University 3D model database (NTU); the experimental results demonstrate that the proposed method achieves better retrieval effectiveness than state-of-the-art methods under several standard evaluation measures.
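The pipeline sketched in the abstract (dense-SIFT descriptors → k-means codebook → bag-of-words histogram → LDA topic distribution via collapsed Gibbs sampling) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: random vectors stand in for dense-SIFT descriptors, the codebook size, topic count, and hyperparameters are illustrative, and the k-means and Gibbs sampler are textbook versions written for clarity, not speed.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=15):
    """Plain Lloyd's k-means: learn a codebook of k visual words."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each descriptor to its nearest center, then recompute centers.
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def bag_of_words(desc, centers):
    """Quantize descriptors to their nearest visual word; return word counts."""
    words = ((desc[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    return np.bincount(words, minlength=len(centers))

def lda_gibbs(docs, V, K, iters=20, alpha=0.1, beta=0.01):
    """Collapsed Gibbs sampling for LDA; returns per-document topic mixtures."""
    D = len(docs)
    ndk = np.zeros((D, K))   # document-topic counts
    nkw = np.zeros((K, V))   # topic-word counts
    nk = np.zeros(K)         # per-topic totals
    z = [rng.integers(K, size=len(doc)) for doc in docs]  # random init
    for d, doc in enumerate(docs):
        for w, t in zip(doc, z[d]):
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                # Conditional posterior over topics for this token.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                t = rng.choice(K, p=p / p.sum())
                z[d][i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    # Smoothed per-document topic distribution (rows sum to 1).
    return (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)

V, K = 16, 4  # illustrative codebook and topic sizes
# Hypothetical stand-in for dense-SIFT descriptors (128-D) of 3 objects' views.
objects = [rng.normal(size=(200, 128)) for _ in range(3)]
codebook = kmeans(np.vstack(objects), V)
hists = [bag_of_words(o, codebook) for o in objects]
# Expand each histogram into a token list, one "word" per descriptor.
docs = [np.repeat(np.arange(V), h) for h in hists]
theta = lda_gibbs(docs, V, K)  # topic distribution per 3-D object
print(theta.shape)             # (3, 4)
```

In the retrieval setting described by the paper, objects would then be compared by a distance between their topic distributions `theta`; the choice of that distance is not specified in the abstract.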
