Abstract

3D object retrieval is an active research topic in computer vision and multimedia analysis. Because the appearance and viewpoints of 3D objects vary widely, the distributions of the training and test sets often differ, which makes the problem well suited to transfer learning or cross-domain learning. In these settings, feature extraction is crucial and should be robust across domains, so this work focuses on feature extraction for 3D objects. Many feature representations and object retrieval approaches have been proposed; among them, view-based deep learning methods achieve state-of-the-art performance. However, existing deep learning retrieval methods simply use a deep neural network to extract features from each view and obtain view-level shape descriptors directly, without exploiting the spatial relationships between views. To mine the spatial relationships among different views and obtain more discriminative 3D shape descriptors, this work proposes 3D object retrieval based on non-local graph neural networks (NGNN). Specifically, a residual network is first adopted as the backbone, and non-local structures are then embedded in the ResNet to learn the intrinsic relationships between views. Finally, a view pooling layer further fuses the information from different views to obtain a discriminative feature for the 3D object. Experimental results on the two public MVRED and NTU 3D datasets show that the non-local graph network is very efficient at exploring the latent relationships among views, and that NGNN significantly outperforms state-of-the-art approaches, with improvements of 12.4%-22.7% on ANMRR.
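The pipeline the abstract describes — backbone features per view, a non-local block that relates views to each other through pairwise affinities, and a view pooling layer that fuses them into one shape descriptor — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the weight matrices, dimensions, and scaled softmax attention are all illustrative assumptions, and in practice the per-view features would come from a ResNet rather than random data.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax along the given axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_theta, w_phi, w_g, w_out):
    # x: (V, D) matrix of V view-level features (assumed to come from a CNN backbone)
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    # (V, V) pairwise affinities between views; every view attends to every other view
    attn = softmax(theta @ phi.T / np.sqrt(theta.shape[1]))
    y = attn @ g                    # aggregate information from related views
    return x + y @ w_out            # residual connection, as in non-local networks

def view_pool(x):
    # element-wise max pooling across views -> a single shape descriptor
    return x.max(axis=0)

# Toy example with random "view features" (dimensions are illustrative)
rng = np.random.default_rng(0)
V, D, Dk = 12, 64, 32
views = rng.standard_normal((V, D))
w_theta, w_phi, w_g = (rng.standard_normal((D, Dk)) * 0.1 for _ in range(3))
w_out = rng.standard_normal((Dk, D)) * 0.1

descriptor = view_pool(non_local_block(views, w_theta, w_phi, w_g, w_out))
```

Retrieval would then rank gallery objects by the distance between such descriptors; the non-local step is what distinguishes this from simply pooling independently extracted view features.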
