Abstract

Image-based 3D model retrieval aims to organize unlabeled 3D models according to their relevance to labeled 2D images. With the easy accessibility of 2D images and the wide application of 3D models, image-based 3D model retrieval has attracted increasing attention. However, it remains a challenging problem due to the modality gap between 2D images and 3D models. Domain adaptation techniques have brought remarkable progress to this task, usually by aligning the global distribution statistics of the two domains; however, such methods are limited in learning discriminative features for target samples because the target domain lacks label information. In this article, besides utilizing the label information of the 2D image domain and adversarial domain alignment, we additionally incorporate self-supervision to address the cross-domain 3D model retrieval problem. Specifically, we simultaneously optimize adversarial adaptation for both domains based on visual features and contrastive learning for the unlabeled 3D model domain, helping the feature extractor learn discriminative feature representations. Contrastive learning maps view representations of the same model close together and view representations of different models far apart. To guarantee adequate and high-quality negative samples for contrastive learning, we design a memory bank that stores and updates a representative view for each 3D model based on the entropy minimization principle. Comprehensive experimental results on the public image-based 3D model retrieval datasets MI3DOR and MI3DOR-2 demonstrate the effectiveness of the proposed method.
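To make the two self-supervised components concrete, the sketch below illustrates (a) selecting a representative view for a 3D model by entropy minimization over per-view classifier predictions, and (b) an InfoNCE-style contrastive loss that pulls a sampled view toward its own model's memory-bank entry and pushes it away from other models' entries. This is a minimal illustration, not the authors' implementation: the function names, tensor shapes, and temperature value are assumptions, and the full method additionally includes the supervised image-domain loss and the adversarial alignment loss, which are omitted here.

```python
import torch
import torch.nn.functional as F


def representative_view_index(view_logits: torch.Tensor) -> torch.Tensor:
    """Pick the most confident (minimum-entropy) view of one 3D model.

    view_logits: (V, C) classifier outputs for V rendered views.
    Returns the index of the view whose prediction entropy is smallest;
    its feature would be written into the memory bank (assumed update rule).
    """
    probs = F.softmax(view_logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # (V,)
    return entropy.argmin()


def contrastive_loss(view_feats: torch.Tensor,
                     model_ids: torch.Tensor,
                     memory_bank: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss against memory-bank entries.

    view_feats:  (B, D) features of sampled views of unlabeled 3D models.
    model_ids:   (B,)   index of the 3D model each view belongs to.
    memory_bank: (M, D) one stored representative feature per 3D model.
    The positive for each view is its own model's bank entry; all other
    entries serve as negatives.
    """
    v = F.normalize(view_feats, dim=1)
    bank = F.normalize(memory_bank, dim=1)
    logits = v @ bank.t() / temperature        # (B, M) cosine similarities
    return F.cross_entropy(logits, model_ids)  # pull positive, push negatives
```

In this reading, the memory bank supplies a large, stable pool of negatives without recomputing features for every 3D model in each batch, and the entropy-minimization rule keeps each bank entry anchored to that model's most discriminative view.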
