Abstract

Depth image-based 3D model retrieval faces the challenges of occlusion, noise, and view variability present in depth images. In this work, we study a new supervised deep autoencoder for depth image-based 3D model retrieval. We investigate both supervised and unsupervised approaches to bring synthetic depth images rendered from 3D models and real depth images into the same feature space, and we show that providing appropriate supervision during backpropagation of the autoencoder improves retrieval performance. The key novelty is a new objective function in which supervised classification information is combined with the reconstruction error for joint optimization. Notably, unlike pairwise model structures, cross-domain retrieval remains possible with only a single deep network in our model. We evaluate the effectiveness of our model on the NYUD2 depth image dataset and ModelNet10 models for ten indoor object categories. We render 95 different views for each 3D model and find that training rendered and real depth images together is an effective way to bridge the gap between 3D models and depth data. The proposed supervised method outperforms a recent pairwise approach.
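The abstract describes a single autoencoder trained jointly on a reconstruction loss and a supervised classification loss over its latent features. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the layer sizes, the loss weight `lam`, and all names are illustrative assumptions.

```python
# Hypothetical sketch of a supervised autoencoder with a joint
# reconstruction + classification objective (not the paper's code).
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    def __init__(self, input_dim=64 * 64, latent_dim=128, num_classes=10):
        super().__init__()
        # Encoder maps a flattened depth image to a latent feature.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim), nn.ReLU(),
        )
        # Decoder reconstructs the depth image from the latent feature.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim),
        )
        # Classifier supplies the supervised signal on the latent feature.
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

def joint_loss(x, x_hat, logits, labels, lam=0.5):
    # Joint objective: reconstruction error plus weighted classification loss.
    recon = nn.functional.mse_loss(x_hat, x)
    cls = nn.functional.cross_entropy(logits, labels)
    return recon + lam * cls

# Training step on a mixed batch of rendered (synthetic) and real depth
# images, so both domains share one feature space through a single network.
model = SupervisedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

batch = torch.rand(32, 64 * 64)        # placeholder depth images
labels = torch.randint(0, 10, (32,))   # placeholder category labels

x_hat, logits, z = model(batch)
loss = joint_loss(batch, x_hat, logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At retrieval time, under this sketch, the latent feature `z` of a query depth image would be matched (e.g. by nearest-neighbor search) against latent features of the rendered views, which is what a shared feature space across domains makes possible with one network.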
