Abstract

In this work, we propose a new two-view domain adaptation network, named the Deep-Shallow Domain Adaptation Network (DSDAN), for 3D point cloud recognition. Unlike traditional 2D image recognition, point cloud data often lack valuable texture information, which makes point cloud recognition challenging, especially in the cross-dataset scenario where the training and testing data exhibit a considerable distribution mismatch. Our DSDAN method tackles cross-dataset 3D point cloud recognition from two aspects. On the one hand, we propose a two-view learning framework that effectively leverages multiple feature representations to improve recognition performance. To this end, we introduce a simple and efficient Bag-of-Points feature as a complementary view to the deep representation, and we further design a cross-view consistency loss to strengthen the two-view learning framework. On the other hand, we propose a two-level adaptation strategy to address the domain distribution mismatch. Specifically, we apply a feature-level distribution alignment module to each view, and we develop an instance-level adaptation approach that selects highly confident pseudo-labeled target samples for adapting the model to the target domain; a co-training scheme then integrates the learning and adaptation processes across the two views. Extensive experiments on the benchmark dataset show that our DSDAN method outperforms existing state-of-the-art methods on the cross-dataset point cloud recognition task.
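To make two of the abstract's components concrete, the sketch below illustrates one plausible form of the cross-view consistency loss and of the confidence-based pseudo-label selection used for instance-level adaptation. This is a minimal, assumption-laden illustration rather than the authors' implementation: the symmetric KL formulation, the two-view agreement criterion, the threshold `tau`, and all function names are hypothetical choices made here for exposition.

```python
import torch
import torch.nn.functional as F

def cross_view_consistency(logits_deep, logits_shallow):
    """Symmetric KL divergence between the class posteriors of the
    deep view and the shallow (Bag-of-Points) view.

    A hypothetical instantiation of a cross-view consistency loss;
    the paper's actual formulation may differ.
    """
    log_p = F.log_softmax(logits_deep, dim=1)
    log_q = F.log_softmax(logits_shallow, dim=1)
    # 0.5 * (KL(q || p) + KL(p || q)), averaged over the batch.
    return 0.5 * (
        F.kl_div(log_p, log_q.exp(), reduction="batchmean")
        + F.kl_div(log_q, log_p.exp(), reduction="batchmean")
    )

def select_confident_pseudo_labels(logits_deep, logits_shallow, tau=0.9):
    """Keep unlabeled target samples on which both views predict the
    same class with confidence above tau (tau is an assumed value).

    Returns a boolean mask over the batch and the shared pseudo-labels.
    """
    prob_d = F.softmax(logits_deep, dim=1)
    prob_s = F.softmax(logits_shallow, dim=1)
    conf_d, pred_d = prob_d.max(dim=1)
    conf_s, pred_s = prob_s.max(dim=1)
    mask = (pred_d == pred_s) & (conf_d > tau) & (conf_s > tau)
    return mask, pred_d

if __name__ == "__main__":
    logits_d = torch.randn(8, 10)  # deep-view logits: 8 samples, 10 classes
    logits_s = torch.randn(8, 10)  # shallow-view logits
    loss = cross_view_consistency(logits_d, logits_s)
    mask, labels = select_confident_pseudo_labels(logits_d, logits_s)
    print(loss.item(), mask.sum().item())
```

In a co-training scheme such as the one the abstract describes, the mask returned by `select_confident_pseudo_labels` would typically be used to add the selected target samples, with their shared pseudo-labels, to the training data of each view in the next round.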
