Point cloud anomaly detection is steadily emerging as a promising research area. Recognizing the importance of feature descriptiveness in this task, this study introduces the Complementary Pseudo Multimodal Feature (CPMF), which combines local geometrical information extracted by handcrafted 3D descriptors with global semantic information extracted by pre-trained 2D neural networks. Specifically, to leverage pre-trained 2D neural networks for point-wise feature extraction, this study projects the original point clouds into multi-view images. These images are then fed into a pre-trained 2D neural network to extract informative 2D modality features. Using the 2D–3D correspondence, the multi-view 2D modality features are projected back into 3D space and aggregated into point-wise 2D modality features. Finally, the point-wise 3D and 2D modality features are fused to derive the CPMF for point cloud anomaly detection. Extensive experiments on the MVTec 3D and Real3D datasets demonstrate the complementarity of the 2D and 3D modality features and the effectiveness of CPMF. Notably, CPMF achieves an object-level AUROC of 95.15% on the MVTec 3D benchmark, significantly outperforming other methods. Code is available at https://github.com/caoyunkang/CPMF.
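To make the described pipeline concrete, the following is a minimal sketch of the CPMF steps, not the authors' exact implementation. All function names, the view count, the orthographic depth rendering, the mean aggregation over views, and the normalize-then-concatenate fusion are illustrative assumptions; FPFH (via Open3D) stands in for the "handcrafted 3D descriptors" and an ImageNet-pretrained ResNet-18 for the "pre-trained 2D neural network".

```python
import numpy as np
import open3d as o3d
import torch
import torch.nn.functional as F
import torchvision.models as models

NUM_VIEWS = 6    # number of virtual viewpoints (illustrative choice)
IMG_SIZE = 224   # rendered image resolution expected by the 2D backbone


def extract_3d_features(points: np.ndarray) -> np.ndarray:
    """Point-wise 3D modality features from a handcrafted descriptor (FPFH here)."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamKNN(knn=30))
    return fpfh.data.T  # (N, 33)


def project_to_view(points: np.ndarray, rot: np.ndarray):
    """Orthographic projection of rotated points to pixel coordinates."""
    p = points @ rot.T
    xy = p[:, :2]
    xy = (xy - xy.min(0)) / (xy.max(0) - xy.min(0) + 1e-8)  # normalize to [0, 1]
    uv = np.clip((xy * (IMG_SIZE - 1)).astype(np.int64), 0, IMG_SIZE - 1)
    return uv, p[:, 2]


def extract_2d_features(points: np.ndarray) -> np.ndarray:
    """Render multi-view images, run a pre-trained CNN, and pull per-pixel
    features back to the points via the 2D-3D correspondence."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    # Drop the avgpool/fc head so the encoder returns spatial feature maps.
    encoder = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()

    per_view = []
    for k in range(NUM_VIEWS):
        angle = 2.0 * np.pi * k / NUM_VIEWS
        rot = np.array([[np.cos(angle), 0.0, np.sin(angle)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(angle), 0.0, np.cos(angle)]])
        uv, depth = project_to_view(points, rot)

        # Crude depth-image "rendering"; a real renderer would handle occlusion.
        img = np.zeros((IMG_SIZE, IMG_SIZE), dtype=np.float32)
        img[uv[:, 1], uv[:, 0]] = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
        x = torch.from_numpy(img)[None, None].repeat(1, 3, 1, 1)

        with torch.no_grad():
            fmap = encoder(x)  # (1, C, h, w)
        fmap = F.interpolate(fmap, size=IMG_SIZE, mode="bilinear",
                             align_corners=False)

        # Read each point's feature at its projected pixel (2D-3D correspondence).
        u = torch.from_numpy(uv[:, 0]).long()
        v = torch.from_numpy(uv[:, 1]).long()
        per_view.append(fmap[0, :, v, u].T.numpy())  # (N, C)

    # Aggregate the multi-view features; a simple mean over views is used here.
    return np.mean(per_view, axis=0)


def cpmf(points: np.ndarray) -> np.ndarray:
    """Fuse point-wise 3D and 2D modality features into the CPMF."""
    f3d = extract_3d_features(points)
    f2d = extract_2d_features(points)
    # L2-normalize each modality, then concatenate (one plausible fusion).
    f3d /= np.linalg.norm(f3d, axis=1, keepdims=True) + 1e-8
    f2d /= np.linalg.norm(f2d, axis=1, keepdims=True) + 1e-8
    return np.concatenate([f3d, f2d], axis=1)  # (N, 33 + C)
```

In a typical feature-based anomaly detection setup, the per-point CPMF of anomaly-free training samples could then be stored in a memory bank, with test-time anomaly scores derived from each point's distance to its nearest stored feature; the detection method itself is outside the scope of this sketch.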