Abstract

This paper proposes a method for biometric-driven facial image recognition based on multivariate correlation analysis, which extracts geometrical feature points and low-level visual features. The low-level visual features, such as colour and texture, are extracted locally from selected prominent regions of the facial image. The geometrical features are captured using the Active Shape Model (ASM). The colour features are extracted from the YCbCr colour model, and the autocorrelation method is deployed to extract the texture features. The extracted features are assembled into a feature tensor matrix. The feature matrix of the key face image is compared with the feature matrices of the target face images stored in the feature vector database using the Canonical Correlation method. The correlation between the key and target feature matrices is tested for significance. If the correlation is significant, the key and target face images are inferred to be of the same person; otherwise, they are concluded to be different. The benchmark facial image datasets GT, LFW, and Pointing '04 were considered for the experiments; in addition, a facial image database of celebrities of our interest was constructed and also subjected to the experiments. The proposed method achieved mean precision scores (mP@α) of 95.27%, 94.20%, 96.19%, and 96.05% for GT, LFW, Pointing '04, and our dataset, respectively. The corresponding F-scores were 96.78%, 95.15%, 97.08%, and 96.96%. The results obtained by the proposed method are comparable to those of existing methods.
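
As a rough, hypothetical sketch of the matching step only (not the paper's implementation), the comparison of a key feature matrix against a stored target matrix by canonical correlation could look like the following; the region count, feature dimension, and the 0.9 decision threshold are illustrative assumptions, and scikit-learn's CCA is used in place of whatever solver the authors employed.

```python
# Minimal sketch: decide whether two per-region feature matrices are
# significantly correlated via canonical correlation analysis (CCA).
import numpy as np
from sklearn.cross_decomposition import CCA

def canonical_correlations(key_feats, target_feats, n_components=2):
    """Return the correlation of each pair of canonical variates between
    two (n_regions x n_features) matrices."""
    cca = CCA(n_components=n_components)
    U, V = cca.fit_transform(key_feats, target_feats)
    return np.array([np.corrcoef(U[:, i], V[:, i])[0, 1]
                     for i in range(n_components)])

# Toy example with synthetic per-region colour/texture features.
rng = np.random.default_rng(0)
key = rng.normal(size=(20, 6))                     # 20 regions, 6 features each
target = key + 0.05 * rng.normal(size=key.shape)   # near-duplicate face
corrs = canonical_correlations(key, target)
same_person = bool(np.all(corrs > 0.9))            # illustrative threshold
print(corrs, same_person)
```

In practice, the threshold (or a formal significance test on the canonical correlations) would be tuned on the training portion of the datasets rather than fixed a priori.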
