Handling and interpreting sparse 3D point clouds, especially from mmWave radar, presents unique challenges due to the inherent data sparsity and the vast domain difference compared to denser point clouds such as those from LiDAR. In this paper, we introduce a novel cascaded generative adversarial network (GAN) approach to bridge this domain gap. The core principle is to progressively refine the radar-based point cloud through a series of GANs, each targeting a higher resolution. By leveraging multi-level features and a hybrid loss function that combines adversarial, geometric, and consistency components, our method ensures a smooth transition from the sparse radar representation to a high-resolution LiDAR-like point cloud. Our cascaded approach operates at the patch level, and the integrated loss function ensures that the generated points not only resemble the target domain but also maintain geometric and structural fidelity. A real-world dataset consisting mostly of moving pedestrians was collected using a system comprising a radar, a LiDAR, and an RGB camera. Through extensive experiments on this pedestrian dataset, we validate the efficacy of our approach. Inference results indicate that our method, the first of its kind, upsamples mmWave radar point clouds with enhanced density and uniformity and closer alignment to the ground-truth LiDAR point clouds.
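The hybrid loss described above can be sketched as a weighted sum of its three components. A minimal illustration follows, assuming the geometric term is a symmetric Chamfer distance between the generated and target point clouds, a common choice for point-cloud generation; the function names, weights, and the form of the consistency term are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a hybrid point-cloud loss: adversarial + geometric +
# consistency. The Chamfer distance stands in for the geometric term
# (an assumption); points are plain (x, y, z) tuples for clarity.

def chamfer_distance(a, b):
    """Symmetric Chamfer distance: mean nearest-neighbour squared
    distance from a to b, plus the same from b to a."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    def one_way(src, dst):
        return sum(min(sq_dist(p, q) for q in dst) for p in src) / len(src)

    return one_way(a, b) + one_way(b, a)


def hybrid_loss(adv_term, generated, target,
                consistency_term=0.0,
                w_adv=1.0, w_geo=1.0, w_con=1.0):
    """Weighted sum of adversarial, geometric, and consistency components.
    The weights and the scalar consistency term are placeholders."""
    geo_term = chamfer_distance(generated, target)
    return w_adv * adv_term + w_geo * geo_term + w_con * consistency_term
```

In a cascaded setup, a loss of this shape would typically be evaluated at each refinement stage, with each stage's generator trained against a discriminator operating at that stage's resolution.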