Abstract

View-based 3D object retrieval, in which multiple views are used for representation and retrieval, has attracted increasing attention due to its great flexibility. In this paper, we propose a discriminative multi-view latent variable model (MVLVM) for this task. Specifically, we design MVLVM with an undirected graph structure in which the view set of a given 3D object is treated as the observations from which the latent visual and spatial contexts are discovered. We then detail the learning and inference procedures of MVLVM for view-based 3D object retrieval. The proposed MVLVM has two beneficial features: 1) it jointly learns visual and spatial contexts for 3D object modelling, and 2) it avoids the difficulty of extracting representative views for model representation. Consequently, it can support flexible 3D model retrieval in real applications because it is free of the camera-array constraints that severely limit traditional methods. We report extensive experiments conducted on single-modal datasets (the NTU and ITI datasets) and a multi-modal dataset (MVRED-RGB and MVRED-Depth). These comparative experiments demonstrate the superiority of the proposed method.
