Abstract

Facial recognition has attracted increasing attention with the rapid growth of artificial intelligence (AI) techniques in recent years. However, most related work on facial reconstruction and recognition is based on large-scale data collection and deep learning image algorithms. These data-driven AI approaches inevitably increase the computational load on the CPU and typically rely heavily on GPU capacity. A further issue with RGB-based facial recognition is its limited applicability in low-light or dark environments. To address this problem, this paper presents an effective procedure for facial reconstruction and recognition using a depth sensor. For each test candidate, the depth camera acquires multiple views of 3D point clouds. The point cloud sets are stitched into a reconstructed 3D model using the iterative closest point (ICP) algorithm. A segmentation procedure then separates the model into a body part and a head part. From the segmented 3D face point clouds, facial features are extracted for recognition scoring. Given a single shot from the depth sensor, the point cloud data is registered against the stored 3D face models to determine the best-matching candidate. Using the proposed feature-based 3D facial similarity score, which combines normal, curvature, and registration similarities between point clouds, a person can be labeled correctly even in a dark environment. The proposed method is suitable for smart devices such as smartphones and tablets equipped with a compact depth camera. Experiments with real-world data show that the proposed method reconstructs denser models and achieves point cloud-based 3D face recognition.
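The abstract describes combining normal, curvature, and registration similarities into one face similarity score and picking the best-matching enrolled model. A minimal sketch of that decision step follows; the equal weighting, the `face_similarity` helper, and the numeric scores are illustrative assumptions, not the paper's actual formulation.

```python
def face_similarity(normal_sim, curvature_sim, registration_sim,
                    weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine the three per-cloud similarity terms into one score.

    Equal weights are assumed here for illustration only; the paper's
    actual weighting scheme may differ.
    """
    terms = (normal_sim, curvature_sim, registration_sim)
    return sum(w * s for w, s in zip(weights, terms))


# Hypothetical similarity scores of one probe cloud against two
# enrolled 3D face models; the best-scoring model labels the probe.
candidates = {
    "model_A": face_similarity(0.92, 0.88, 0.90),
    "model_B": face_similarity(0.61, 0.70, 0.65),
}
best = max(candidates, key=candidates.get)
print(best)  # → model_A
```

The recognition decision is simply an argmax over enrolled models, so the score only needs to rank candidates consistently, not be calibrated in absolute terms.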

Highlights

  • In the past, hardware technology and the associated algorithms for 3D facial sensing were not well developed

  • We present a complete procedure, from initial data acquisition to face point cloud reconstruction, and apply untrained 3D point cloud data to facial recognition

  • Treating the point set as a rigid body, iterative closest point (ICP) iteratively minimizes the Euclidean distance between two point clouds
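The ICP idea in the highlight above can be sketched in a few lines: match each source point to its nearest target point, then solve the rigid transform that minimizes the Euclidean distance between matched pairs (via SVD), and repeat. This is a minimal brute-force sketch, not the paper's implementation; the `icp_step` function and the toy data are illustrative.

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: nearest-neighbor matching followed by the
    SVD-based rigid transform that best aligns the matched pairs."""
    # Nearest-neighbor correspondences (brute force for clarity)
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[d2.argmin(axis=1)]
    # Optimal rotation and translation between centered point sets
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t

# Toy check: recover a small rigid offset between two copies of a cloud
rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, (100, 3))
source = target + np.array([0.05, -0.02, 0.01])  # translated copy
for _ in range(10):
    source = icp_step(source, target)
print(np.abs(source - target).max())  # near zero after convergence
```

Real pipelines replace the brute-force matching with a kD-tree nearest-neighbor search, which is exactly why the paper's outline includes kD-tree construction.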


Summary

Introduction

Until recently, hardware technology and the associated algorithms for 3D facial sensing were not well developed, so related work on facial reconstruction and recognition has mainly been based on 2D image processing. A 3D point cloud is composed of points in three-dimensional space that represent the shape and surface of an object; features such as curvature and normals can be derived from it. The main contributions of this work include: (a) ROI confinement techniques for precise head segmentation by K-means clustering; (b) efficient computation and alignment techniques for multi-view point cloud registration by ICP; (c) a simple method for outlier detection by DBSCAN together with face feature extraction; and (d) a novel face registration similarity score for evaluating 3D face point cloud recognition. Data projected into lower dimensions still preserve the information held by the original data. Features such as normals and curvature can be obtained by computing the eigenvalues and eigenvectors of the covariance matrix of each local point set.
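The last point above, estimating a normal and curvature from the eigen-decomposition of a local covariance matrix, can be sketched as follows. This is a standard surface-variation estimator under assumed conventions (smallest-eigenvalue eigenvector as the normal, smallest eigenvalue over the trace as curvature), not the paper's exact code.

```python
import numpy as np

def normal_and_curvature(neighborhood):
    """Estimate a point's surface normal and curvature from its local
    neighborhood via eigen-decomposition of the covariance matrix."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # direction of least variance
    curvature = eigvals[0] / eigvals.sum()   # surface-variation measure
    return normal, curvature

# Example: points sampled near the z = 0 plane should yield a normal
# close to the z-axis and near-zero curvature.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       rng.normal(0, 0.01, 50)])
n, c = normal_and_curvature(pts)
print(abs(n[2]), c)
```

In practice the neighborhood for each point is gathered with a kD-tree nearest-neighbor query, matching the kD-tree section in the paper's outline.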

Outline of the remaining sections:
  • kD-tree construction and nearest-point search
  • K-means clustering
  • Head segmentation (Section 3.1)
  • Point cloud denoising (Section 3.3)
  • Point cloud similarity: normal, curvature, registration, and face similarity scores
  • Experiment verification
  • Findings
