Abstract

Unsupervised domain adaptation (UDA) significantly reduces the gap between the source domain and the target domain in machine learning and computer vision tasks. Most UDA approaches are applied to images and videos, and only a few methods address domain adaptation for 3-D computer vision problems. Existing UDA approaches operating on point clouds attempt to extract domain-invariant features across domains for feature alignment. However, higher commonality brings less diversity and results in a loss of detailed information. In this article, we propose a novel dual-branch feature alignment network (DFAN) architecture for domain adaptation on point cloud visual tasks to better exploit the respective characteristics of local and global features. Our approach specializes in the extraction and alignment of global and local features, using a different strategy in each branch so that the two branches complement each other. We also introduce a hierarchical alignment strategy for local feature alignment and a distribution alignment strategy for global feature alignment. Experiments on the PointDA-10 and PointSegDA datasets show that our approach achieves state-of-the-art performance on the UDA of point cloud classification and segmentation tasks. The ablation study demonstrates the effectiveness of the dual-branch design and the feature alignment strategies.
