Synthetic aperture radar (SAR) 3D point cloud reconstruction eliminates the layover problem inherent in 2D SAR image projections, and recognition of the reconstructed point cloud can significantly enhance target identification and information extraction. Current SAR 3D point cloud segmentation methods, based on traditional machine learning clustering approaches, suffer from poor automation and accuracy. Meanwhile, the absence of publicly available labeled SAR datasets impedes the progress of deep learning-based SAR 3D point cloud segmentation methods. To tackle these challenges, we introduce, for the first time, an alternative training approach for SAR 3D point cloud segmentation that uses LiDAR-annotated data, which offers a more abundant sample pool and thus alleviates the shortage of training data. Nevertheless, a segmentation model trained on LiDAR point clouds exhibits a significant decline in performance when applied directly to SAR 3D reconstructed point clouds due to the cross-domain discrepancy. This research presents a pioneering domain adaptation 3D semantic segmentation framework that implements cross-modal learning for SAR point clouds. In our scheme, a simple yet effective technique called SARDBS-Mix is developed to segment SAR double-bounce scattering regions; it employs a mixing strategy to capture the distinctive reflection characteristics of SAR data. Furthermore, we implement center alignment and normalization, local augmentation, and weighted cross entropy to mitigate the domain gap between LiDAR and SAR data and the class imbalance. The experimental results validate the feasibility and effectiveness of the proposed method for SAR 3D reconstructed point cloud segmentation.
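To illustrate the class-imbalance remedy mentioned above, the sketch below shows one common way to build a weighted cross-entropy loss in PyTorch. The inverse-frequency weighting scheme, the class list, and all variable names here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical per-class point counts from a LiDAR training set
# (class order and counts are illustrative assumptions).
class_counts = torch.tensor([5.0e6, 1.2e6, 3.0e5])  # e.g. ground, building, vegetation

# One common choice: inverse class frequency, normalized so the
# weights average to 1; the paper may use a different scheme.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)

# logits: (N_points, num_classes) per-point scores from a segmentation net
# labels: (N_points,) integer class indices
logits = torch.randn(1024, 3)
labels = torch.randint(0, 3, (1024,))
loss = criterion(logits, labels)  # rare classes contribute more per point
```

Up-weighting rare classes this way keeps frequent categories (e.g. ground) from dominating the gradient, which matters when double-bounce regions occupy only a small fraction of the scene.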