Alignface: Enhancing Face Verification Models Through Adaptive Alignment Of Pose, Expression, and Illumination

Abstract

In the field of face recognition and verification, the practice of face frontalization is conventionally regarded as a standard technique. However, traditional frontalization methods often manipulate original facial images, relying on symmetric cues or data distributions from machine learning model training, which may lead to the distortion of genuine facial features. To tackle these challenges, this paper presents AlignFace, a novel face normalization algorithm specifically designed for preprocessing in the context of face verification. Distinct from existing methods, AlignFace uniquely aligns head pose, expression, and illumination conditions between image pairs. This is achieved by estimating these parameters in one image and reconstructing the other to correspond, all while meticulously preserving each image’s distinct identity features. Such an approach not only ensures a more authentic representation of facial characteristics but also maintains the integrity of real features in one of the images. Our extensive experimental evaluations, conducted on benchmark datasets such as LFW, CFP, AgeDB, and IJB-B, underscore the effectiveness of AlignFace. The comparative analysis with existing methods demonstrates its state-of-the-art performance, highlighting substantial advancements in face verification accuracy. For further research and replication, the code for our method is accessible at: https://github.com/SaharHusseini/ALIGNFACE.
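The pairwise-alignment idea in the abstract can be pictured as a simple data flow: estimate pose, expression, and illumination in one image, reconstruct the other to match, then compare identities. The sketch below is not the authors' implementation; `estimate_conditions`, `reconstruct`, and `embed` are hypothetical stand-ins (here no-op stubs) for the paper's condition estimator, identity-preserving reconstruction model, and face recognition network.

```python
import numpy as np

# Hypothetical stand-ins: a real system would use 3D-face / relighting
# models here; these stubs only show the data flow of the pipeline.
def estimate_conditions(img):
    """Estimate pose, expression, and illumination parameters (stub)."""
    return {"pose": 0.0, "expression": 0.0, "illumination": 0.0}

def reconstruct(img, conditions):
    """Re-render `img` under the given conditions, keeping identity (stub)."""
    return img

def embed(img):
    """Identity embedding from a face network (stub: flatten + normalize)."""
    v = img.reshape(-1).astype(float)
    return v / np.linalg.norm(v)

def alignface_verify(img_a, img_b, threshold=0.5):
    """Pairwise normalization before matching: estimate conditions in one
    image, reconstruct the other to correspond, then compare embeddings."""
    conditions = estimate_conditions(img_a)
    img_b_aligned = reconstruct(img_b, conditions)
    score = float(embed(img_a) @ embed(img_b_aligned))
    return score > threshold

# Toy usage: a genuine pair (the same image twice) should be accepted.
img = np.arange(64.0).reshape(8, 8) + 1.0
accepted = alignface_verify(img, img)
```

The point of the structure is that only one image of the pair is reconstructed, so the other retains its unaltered, genuine features.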

Similar Papers
  • Conference Article
  • 10.2991/amcce-15.2015.8
A Method Based On Face Verification
  • Jan 1, 2015
  • Xuezhi Zhang + 4 more

In recent years, video-based methods have been among the most active research directions in the face verification field. The face verification problem can be defined as follows: given an input picture or video, identify or validate the subject using a face database. Video-to-video face verification means the input is a video sequence and verification is performed using face video data, which carries the most useful information: many pictures of one person, the continuity of the face in time and space, the possibility of a three-dimensional face model, and so on. The ways face information in video is described in the existing literature can be summarized as vectors, matrices, probability models, dynamic models, and manifolds. Adopting probability and manifold models requires a large number of samples reflecting the distribution of faces, but such models can describe that distribution accurately. Dynamic models make better use of temporal and spatial information, but the methods are complicated and the computational cost is high. Describing the input as vectors has a major defect: the randomness of sample selection. The matrix approach is comparatively simple and can use information from pictures that are discontinuous in time, but how to describe the relationship between matrices remains a question worth studying. Face features based on PCA (Principal Component Analysis) constitute the classic algorithm in face verification; derived from basis analysis, it is a face verification and description technique whose transform-coefficient features are simple, fast, and effective.
Face verification compares a given face image with pictures stored in a computer to determine whether the face belongs to the claimed person. It is a one-to-one matching process, and usually several face images of each person, taken from different angles, are stored. A video-based face verification system includes a face detection module, a face image preprocessing module, a face feature extraction module, and a face verification module, as shown in Picture 1.
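The PCA ("eigenface") pipeline described above can be sketched in a few lines. This is a generic illustration of the classic algorithm, not code from the paper; the threshold and the random toy data are arbitrary.

```python
import numpy as np

def fit_eigenfaces(faces, n_components):
    """Learn a PCA ("eigenface") basis from flattened face images.

    faces: (n_samples, n_pixels) array; each row is one flattened face.
    Returns the mean face and the top principal components.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components in Vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Express a flattened face by its transform coefficients."""
    return components @ (face - mean)

def verify(face_a, face_b, mean, components, threshold):
    """One-to-one matching: accept if the projected faces are close."""
    d = np.linalg.norm(project(face_a, mean, components)
                       - project(face_b, mean, components))
    return d < threshold

# Toy usage with random data standing in for real face images.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(20, 64))        # 20 "faces", 64 "pixels" each
mean, comps = fit_eigenfaces(gallery, n_components=5)
same = verify(gallery[0], gallery[0], mean, comps, threshold=1e-6)
```

In a real system the gallery rows would be aligned, cropped face images, and the threshold would be tuned on a validation set.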

  • Conference Article
  • Cited by 13
  • 10.1109/icb.2015.7139079
Fine-grained face verification: Dataset and baseline results
  • May 1, 2015
  • Junlin Hu + 2 more

This paper investigates the problem of fine-grained face verification under unconstrained conditions. In the conventional face verification task, the verification model is trained with positive and negative face pairs, where each positive pair contains two face images of the same person and each negative pair usually consists of two face images from different subjects. In many real applications, however, the facial appearance of identical twins looks very similar even though they constitute a negative pair in face verification. It is therefore important for a practical system to determine whether a given face pair comes from the same person or from a pair of twins, because most existing face verification systems fail to work well in such a scenario. In this work, we define this problem as fine-grained face verification and collect an unconstrained face dataset containing 455 pairs of identical twins to generate negative face pairs, which we use to evaluate several baseline verification models for fine-grained unconstrained face verification. Benchmark results under the unsupervised and restricted settings show the challenge of fine-grained face verification in the wild.

  • Conference Article
  • Cited by 1
  • 10.1109/icccs52626.2021.9449167
Attention-based Efficient Lightweight Model for Accurate Real-Time Face Verification on Embedded Device
  • Apr 23, 2021
  • Dongmei Wei + 4 more

With the rapid development of face verification technology, current systems have reached high accuracy. However, face verification requires large computing resources and complex model parameters, so it is difficult to apply in real scenes. A face verification system built for embedded applications can solve the problems of huge model storage and high computing resource consumption. Existing mainstream face verification models, such as VGG16Net, reach high accuracy. Although there have been attempts to design lightweight neural networks, such as MobileFaceNet, they still suffer from shortcomings such as complex structure and high resource consumption, so they are suitable only for mobile devices and are difficult to apply to low-power, low-performance embedded systems. For embedded systems with limited memory and computing power, a more lightweight neural network model is needed. Therefore, this paper addresses two problems: the large storage space required by models and their high computational resource consumption. First, a lightweight face verification network MobileFaceNet-v3m, based on the attention mechanism, is designed to reduce model storage. Our model occupies 15.26% less space than MobileFaceNet, successfully reducing resource consumption while maintaining high accuracy: 95.47% face verification accuracy on LFW. Second, we train the model with a learning rate schedule based on efficient warm restarts, which speeds up the improvement in accuracy, and we successfully port the model to an embedded platform, another step toward real-scene application.

  • Conference Article
  • Cited by 3
  • 10.1109/ddcls52934.2021.9455715
Face Verification Technology Based on FaceNet Similarity Recognition Network
  • May 14, 2021
  • Fengwei Gu + 3 more

The background of face images in real scenes is complex, and problems such as illumination and occlusion greatly reduce the performance of face verification models. This paper proposes FaceNetSRM, a face verification algorithm based on a FaceNet similarity recognition network, to improve verification performance and the accuracy of Chinese face verification. First, the deep convolutional neural network framework of FaceNet is determined, and a similarity recognition module is used to replace the Euclidean distance module in FaceNet. Then, the CASIA-WebFace face dataset and a self-made face dataset, C-facev1, are used to train the proposed algorithm. Finally, the trained model is tested and evaluated on the LFW and CASIA-FaceV5 face datasets to show the effectiveness of the method, and its verification results are compared with those of FaceNet. The experimental results show that the face verification accuracy of the FaceNetSRM algorithm is 1.5% higher than that of FaceNet, and the accuracy of Chinese face verification is improved by 2.8%. The algorithm has good robustness and generalization ability and can be applied in face verification systems.
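For context, FaceNet-style verification thresholds a distance between L2-normalized embeddings; the abstract's similarity recognition module replaces that Euclidean distance step. Below is a minimal sketch of the two kinds of scoring rule, with cosine similarity standing in for the paper's learned similarity module and an illustrative (not authoritative) threshold.

```python
import numpy as np

def l2_normalize(v):
    """FaceNet-style embeddings are compared on the unit hypersphere."""
    return v / np.linalg.norm(v)

def euclidean_verify(emb_a, emb_b, threshold=1.1):
    """FaceNet-style rule: accept when the squared L2 distance between
    normalized embeddings is below a threshold (value illustrative)."""
    d2 = float(np.sum((l2_normalize(emb_a) - l2_normalize(emb_b)) ** 2))
    return d2 < threshold

def cosine_similarity(emb_a, emb_b):
    """A fixed similarity score; the paper's learned similarity module
    would replace a hand-crafted score like this one."""
    return float(l2_normalize(emb_a) @ l2_normalize(emb_b))
```

For unit vectors the two views are related by d² = 2 − 2·cos, so swapping the distance for a similarity changes what is learned, not what is representable.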

  • Research Article
  • Cited by 57
  • 10.1145/3469288
XCos: An Explainable Cosine Metric for Face Verification Task
  • Oct 31, 2021
  • ACM Transactions on Multimedia Computing, Communications, and Applications
  • Yu-Sheng Lin + 5 more

We study XAI (explainable AI) on the face recognition task, particularly face verification. Face verification has become a crucial task in recent years and has been deployed in plenty of applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond exceptional performance, deep face verification models need more interpretability so that we can trust the results they generate. In this article, we propose a novel similarity metric, called explainable cosine (xCos), that comes with a learnable module that can be plugged into most verification models to provide meaningful explanations. With the help of xCos, we can see which parts of the two input faces are similar, where the model pays its attention, and how the local similarities are weighted to form the output xCos score. We demonstrate the effectiveness of our proposed method on LFW and various competitive benchmarks, not only providing novel and desirable model interpretability for face verification but also preserving accuracy when plugged into existing face recognition models.
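The mechanism the abstract describes, local cosine similarities weighted by an attention map, can be sketched as follows. This is a simplified illustration, not the authors' code: the feature-map shapes are assumptions, and in the real method the attention weights are produced by a learned module rather than supplied by hand.

```python
import numpy as np

def xcos_like_score(feat_a, feat_b, attention):
    """Attention-weighted average of patch-wise cosine similarities.

    feat_a, feat_b: (H, W, C) spatial feature maps from a face CNN.
    attention: (H, W) non-negative weights that sum to 1.
    """
    num = np.sum(feat_a * feat_b, axis=-1)
    den = np.linalg.norm(feat_a, axis=-1) * np.linalg.norm(feat_b, axis=-1)
    local_cos = num / np.maximum(den, 1e-12)   # per-location cosine map
    return float(np.sum(attention * local_cos))

# Identical feature maps with uniform attention give a score of 1.0;
# the intermediate `local_cos` and `attention` maps are the explanation.
rng = np.random.default_rng(0)
feat = rng.normal(size=(7, 7, 32))
uniform = np.full((7, 7), 1.0 / 49)
score = xcos_like_score(feat, feat, uniform)
```

The explanatory value comes from inspecting the intermediate maps: `local_cos` shows which regions match, and `attention` shows which regions the model weighs most.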

  • Conference Article
  • Cited by 8
  • 10.1145/3591106.3592289
FaceLivePlus: A Unified System for Face Liveness Detection and Face Verification
  • Jun 12, 2023
  • Ying Zhang + 5 more

Face verification is a popular way to verify someone's identity in a broad range of applications. But such systems are vulnerable to face spoofing attacks via, for example, a fraudulent copy of a photo, making it necessary to include face liveness detection as an additional safeguard. In most existing studies, face liveness detection is realized by a separate machine learning model in addition to the model for face verification. Such a two-model configuration may face challenges when deployed onto platforms with limited computational power and storage (e.g. mobile phones, IoT devices), especially considering that each model may have millions of parameters. Inspired by the fact that humans can verify a person's identity and liveness at a single glance, we develop a novel system, named FaceLivePlus, to learn a single, universal face descriptor for the two tasks (face verification and liveness detection) so that the computational workload and storage space can be halved. To achieve this, we formulate the underlying relationship between the two tasks and seamlessly embed this relationship in a distance-ranking deep model. The model works directly on features rather than classification labels, which makes the system generalize well to unseen data. Extensive experiments show that our average half total error rate (HTER) improves on the state of the art by at least 15% and 8% on two benchmark datasets. We anticipate this approach could become a new direction for face authentication.

  • Research Article
  • Cited by 32
  • 10.1016/j.patcog.2017.01.011
A weakly supervised method for makeup-invariant face verification
  • Jan 10, 2017
  • Pattern Recognition
  • Yao Sun + 5 more


  • Conference Article
  • Cited by 316
  • 10.1109/iccv.2013.188
Hybrid Deep Learning for Face Verification
  • Dec 1, 2013
  • Yi Sun + 2 more

This paper proposes a hybrid convolutional network (ConvNet)-Restricted Boltzmann Machine (RBM) model for face verification in wild conditions. A key contribution of this work is to directly learn relational visual features, which indicate identity similarities, from raw pixels of face pairs with a hybrid deep network. The deep ConvNets in our model mimic the primary visual cortex to jointly extract local relational visual features from two face images compared with the learned filter pairs. These relational features are further processed through multiple layers to extract high-level and global features. Multiple groups of ConvNets are constructed in order to achieve robustness and characterize face similarities from different aspects. The top-layer RBM performs inference from complementary high-level features extracted from different ConvNet groups with a two-level average pooling hierarchy. The entire hybrid deep network is jointly fine-tuned to optimize for the task of face verification. Our model achieves competitive face verification performance on the LFW dataset.

  • Research Article
  • Cited by 3
  • 10.1007/s11760-011-0246-4
Client-specific A-stack model for adult face verification across aging
  • Aug 18, 2011
  • Signal, Image and Video Processing
  • Andrzej Drygajlo + 1 more

The problem of the time validity of biometric models has received only marginal attention from researchers. In this paper, we propose to manage the influence of aging on an adult face verification system with an A-stack age modeling technique, which uses age as a class-independent metadata quality measure together with scores from a single or multiple baseline classifiers in order to obtain better face verification performance. This allows for improved long-term class separation by introducing a dynamically changing decision boundary across age progression in the scores-age space using a short-term enrollment model. This new method, based on the concepts of classifier stacking and an age-aware decision boundary, compares favorably with the conventional face verification approach, which uses an age-independent decision threshold calculated only in the score space at the time of enrollment. Our experiments on the YouTube and MORPH data show that the proposed approach improves identification accuracy over the baseline classifier.

  • Conference Article
  • Cited by 2
  • 10.1109/icpr.2010.326
Multi-classifier Q-stack Aging Model for Adult Face Verification
  • Aug 1, 2010
  • Weifeng Li + 1 more

The influence of age progression on the performance of multi-classifier face verification systems is a challenging and largely open research problem that deserves increasing attention. In this paper, we propose to manage the influence of aging on an adult face verification system with a multi-classifier Q-stack age modeling technique, which uses age as a class-independent metadata quality measure together with scores from baseline classifiers combining global and local patterns, in order to obtain better recognition rates. This allows for improved long-term class separation by introducing a 2D parameterized decision boundary in the scores-age space using a short-term enrollment model. This new method, based on the concepts of classifier stacking and an age-dependent decision boundary, compares favorably with the conventional face verification approach, which uses an age-independent decision threshold calculated only in the score space at the time of enrollment. The proposed approach is evaluated on the MORPH database.
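The scores-age stacking idea shared by this paper and the A-stack work above can be illustrated with a plain logistic boundary fitted in the (score, age) plane. This is a generic sketch under the assumptions of a single baseline score and ages normalized to [0, 1], not the papers' actual Q-stack/A-stack formulation.

```python
import numpy as np

def fit_stacked_boundary(scores, ages, labels, lr=0.1, steps=2000):
    """Fit a logistic decision boundary in the (score, age) plane.

    scores: baseline classifier scores; ages: normalized age metadata;
    labels: 1.0 for genuine comparisons, 0.0 for impostors.
    """
    x = np.column_stack([scores, ages, np.ones_like(scores)])
    w = np.zeros(3)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-x @ w))          # sigmoid prediction
        w -= lr * x.T @ (p - labels) / len(labels)  # gradient step
    return w

def decide(score, age, w):
    """Accept when (score, age) falls on the genuine side of the boundary."""
    return w[0] * score + w[1] * age + w[2] > 0.0
```

Because the boundary depends on age, the effective score threshold shifts as the subject ages, which is exactly what a fixed, enrollment-time threshold cannot do.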

  • Research Article
  • Cited by 96
  • 10.1109/tpami.2015.2505293
Hybrid Deep Learning for Face Verification.
  • Dec 3, 2015
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Yi Sun + 2 more

This paper proposes a hybrid convolutional network (ConvNet)-Restricted Boltzmann Machine (RBM) model for face verification. A key contribution of this work is to learn high-level relational visual features with rich identity similarity information. The deep ConvNets in our model start by extracting local relational visual features from two face images in comparison, which are further processed through multiple layers to extract high-level and global relational features. To keep enough discriminative information, we use the last hidden layer neuron activations of the ConvNet as features for face verification instead of those of the output layer. To characterize face similarities from different aspects, we concatenate the features extracted from different face region pairs by different deep ConvNets. The resulting high-dimensional relational features are classified by an RBM for face verification. After pre-training each ConvNet and the RBM separately, the entire hybrid network is jointly optimized to further improve the accuracy. Various aspects of the ConvNet structures, relational features, and face verification classifiers are investigated. Our model achieves the state-of-the-art face verification performance on the challenging LFW dataset under both the unrestricted protocol and the setting when outside data is allowed to be used for training.

  • Research Article
  • Cited by 3
  • 10.1155/2014/769101
A Method Based on Active Appearance Model and Gradient Orientation Pyramid of Face Verification as People Age
  • Jan 1, 2014
  • Mathematical Problems in Engineering
  • Ji-Xiang Du + 2 more

Face verification in the presence of age progression is an important problem that has not been widely addressed. In this paper, we propose to use the active appearance model (AAM) and a gradient orientation pyramid (GOP) feature representation for this problem. First, we apply the AAM to the dataset to generate AAM images; we then compute gradient orientation representations over a hierarchical pyramid, which yields the GOP features. When combined with a support vector machine (SVM), experimental results show that our approach performs excellently on two public-domain face aging datasets: FGNET and MORPH. Second, we compare the proposed method with a number of related face verification methods; the results show that the new approach is more robust and performs better.
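A gradient orientation pyramid can be sketched generically as per-pixel gradient orientations computed at successively downsampled scales. This simplified version, not the paper's exact method, uses plain decimation where the original would smooth before downsampling.

```python
import numpy as np

def gradient_orientation(img):
    """Per-pixel gradient orientation (radians) of a grayscale image."""
    gy, gx = np.gradient(img.astype(float))   # row- and column-direction gradients
    return np.arctan2(gy, gx)

def gradient_orientation_pyramid(img, levels=3):
    """Gradient orientations at successively downsampled scales.

    Simplified: uses 2x decimation in place of smoothing + downsampling.
    """
    pyramid = []
    current = img.astype(float)
    for _ in range(levels):
        pyramid.append(gradient_orientation(current))
        current = current[::2, ::2]
    return pyramid
```

Orientations are useful for aging-robust matching because they are insensitive to the smooth, multiplicative lighting and texture changes that accumulate over time.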

  • Research Article
  • Cited by 25
  • 10.1016/j.procs.2019.12.070
Identity authentication on mobile devices using face verification and ID image recognition
  • Jan 1, 2019
  • Procedia Computer Science
  • Xing Wu + 5 more


  • Research Article
  • Cited by 8
  • 10.1049/iet-bmt.2012.0024
Total variability modelling for face verification
  • Dec 1, 2012
  • IET Biometrics
  • R Wallace + 1 more

This study presents the first detailed study of total variability modelling (TVM) for face verification. TVM was originally proposed for speaker verification, where it has been accepted as state-of-the-art technology. Also referred to as front-end factor analysis, TVM uses a probabilistic model to represent a speech recording as a low-dimensional vector known as an 'i-vector'. This representation has been successfully applied to a wide variety of speech-related pattern recognition applications, and remains a hot topic in biometrics. In this work, the authors extend the application of i-vectors beyond the domain of speech to a novel representation of facial images for the purpose of face verification. Extensive experimentation on several challenging and publicly available face recognition databases demonstrates that TVM generalises well to this modality, providing between 17 and 39% relative reduction in verification error rate compared to a baseline Gaussian mixture model system. Several i-vector session compensation and scoring techniques were evaluated, including source-normalised linear discriminant analysis (SN-LDA), probabilistic LDA and within-class covariance normalisation. Finally, this study provides a detailed comparison of the complexity of TVM, highlighting some important computational advantages with respect to related state-of-the-art techniques.
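For context, i-vector systems typically score a trial with cosine scoring, optionally after session compensation such as within-class covariance normalisation (WCCN), one of the techniques the study evaluates. The numpy sketch below follows one common WCCN convention and is a generic illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def cosine_score(ivec_a, ivec_b):
    """Standard cosine scoring between two i-vectors."""
    return float(ivec_a @ ivec_b /
                 (np.linalg.norm(ivec_a) * np.linalg.norm(ivec_b)))

def wccn(ivectors, labels):
    """Within-class covariance normalisation (one common convention).

    Returns a matrix B; scoring is then done on transformed vectors B @ v,
    which whitens the average within-class (session) variability.
    """
    classes = np.unique(labels)
    dim = ivectors.shape[1]
    w = np.zeros((dim, dim))
    for c in classes:
        x = ivectors[labels == c]
        x = x - x.mean(axis=0)
        w += x.T @ x / len(x)
    w /= len(classes)
    # B^T B = W^{-1}, so transformed dot products use the W^{-1} metric.
    return np.linalg.cholesky(np.linalg.inv(w)).T
```

Session compensation of this kind is what lets a fixed cosine threshold work across recording (or here, imaging) conditions.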

  • Conference Article
  • Cited by 8
  • 10.1109/tsp55681.2022.9851255
A Novel BlazeFace Based Pre-processing for MobileFaceNet in Face Verification
  • Jul 13, 2022
  • Necmettin Bayar + 2 more

Face verification is an important security step on mobile devices and many other systems, so it has to work with high accuracy. Besides the importance of accuracy in a face verification model, its weight and computational complexity play crucial roles, especially on mobile devices. In this study, we aim to provide a novel pre-processing step for MobileFaceNet without affecting its accuracy. With this contribution, the overall pipeline has a smaller weight and faster inference time compared to the available pre-processing models for MobileFaceNet, such as the multi-task cascaded convolutional neural network (MTCNN) and RetinaFace. The face verification test results show the superiority of our proposed model compared to state-of-the-art models in terms of weight and speed.
