In this paper, we present a non-rigid point cloud matching method based on a structure that is invariant under face deformation. Our work is motivated by the practical needs of 3D face reconstruction and re-topology, both of which depend on computing correspondences between deformable models. Our paper makes three main contributions. First, we propose an approach that normalizes the global structure features of expressive faces using texture-space properties, reducing the variation magnitude of facial landmarks. Second, we modify the traditional shape context descriptor to solve the problem of regional cross-mismatch. Third, we collect a face dataset covering various expressions. Ablation studies and comparative experiments were conducted to evaluate this work. In face deformation cases, our method achieved 99.89% accuracy on our self-collected face dataset, outperforming several popular algorithms. The estimated landmark correspondences can thus help modelers build digital humans more easily, saving considerable manpower and time.