This study reports an effective and robust edge-based scheme for reconstructing 3D human faces from single input images, addressing the drawbacks of existing methods under large face pose angles or noisy input. Accurate 3D face reconstruction from 2D images is important because it enables a wide range of applications, such as face recognition, animation, games, and AR/VR systems. Edge features extracted from 2D images contain rich and robust 3D geometric information and have been used together with landmarks for face reconstruction. However, accurately reconstructing 3D faces from contour features is challenging, since traditional edge or contour detection algorithms introduce a great deal of noise that adversely affects the reconstruction. This paper reports on the use of a hard-blended face contour feature, obtained from a neural network and a Canny edge extractor, for face reconstruction. The quantitative results indicate that our method achieves a notable improvement in face reconstruction, with a Euclidean distance error of 1.64 mm and a normal vector distance error of 1.27 mm against the ground truth, outperforming both traditional and other deep learning-based methods. The gains are especially pronounced for face shape reconstruction under large pose angles. The method also achieves higher accuracy and robustness on in-the-wild images affected by blurring, makeup, occlusion, and poor illumination.
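As an illustration of the kind of edge blending described above, the following is a minimal sketch (not the authors' implementation) of hard-blending a Canny edge map with a contour probability map from a hypothetical face-contour network; the thresholds, the `contour_prob` input, and the blending rule are all assumptions for illustration.

```python
import cv2
import numpy as np

def hard_blend_contours(gray_image, contour_prob, prob_thresh=0.5,
                        canny_low=50, canny_high=150):
    """Hard-blend a learned face-contour probability map with Canny edges.

    gray_image   : uint8 grayscale face image of shape (H, W)
    contour_prob : float map in [0, 1] of shape (H, W) from a contour network

    The blending rule used here (keep Canny edge pixels only where the network
    is confident a face contour exists) is an illustrative assumption, not the
    paper's exact formulation.
    """
    # Classical Canny edges: dense but noisy (hair, background, texture).
    canny_edges = cv2.Canny(gray_image, canny_low, canny_high)  # values 0 or 255

    # Hard (binary) mask from the network's contour confidence.
    contour_mask = (contour_prob >= prob_thresh).astype(np.uint8)

    # Keep only Canny pixels inside the predicted contour regions,
    # suppressing edge noise unrelated to the face geometry.
    blended = np.where(contour_mask > 0, canny_edges, 0).astype(np.uint8)
    return blended
```

The resulting binary edge map could then serve as the contour feature driving the 3D face fitting, in place of raw Canny output.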