• A framework using dual Gaussian distributions to model face embeddings with uncertainty.
• Learn a sample-specific weight adaptively to integrate the sub-Gaussian features into a compact representation.
• Propose minimizing a weighted Euclidean distance and the entropy of the adaptive weight to constrain the two distributions.
• Perform comprehensive experiments and analyses to illustrate the effectiveness of our method for challenging face recognition.

Most existing face recognition methods model face images as deterministic points in the latent space. However, such models inevitably suffer a performance drop in fully unconstrained scenarios because of the intrinsic noise in the images. To mitigate the detrimental impact of noisy data on model training, distribution estimation has been introduced into face recognition: each face image is modeled as a Gaussian distribution, which effectively improves robustness against noise. However, the learned uncertainty (variance) relates to only one attribute, the quality of the image. We propose dual Gaussian modeling (DGM) for deep face embeddings. For an input image, the network learns two Gaussian distributions simultaneously. The main Gaussian branch focuses on learning easy samples in the training dataset, while the other mainly deals with faces with large pose, so the uncertainty is correlated not only with image quality but also with facial pose. During training, a sample-specific adaptive weight is learned to integrate the two sub-Gaussian features into a more compact discriminative embedding for classification. In addition, we introduce a weighted Euclidean distance and minimize the entropy of the adaptive weight to regulate the relationship between the two distributions. Comprehensive experiments and analyses demonstrate that our method boosts face recognition performance on both common and more unconstrained benchmarks, such as IJB-C.
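
To make the fusion concrete, here is a minimal PyTorch sketch of a dual-branch Gaussian head with a sample-specific adaptive weight, a weighted Euclidean distance between the two branch means, and an entropy term on the weight, as outlined in the abstract. Module names, layer sizes, and the loss weighting are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of dual Gaussian modeling (DGM)-style fusion.
# All names (DualGaussianHead, weight_net, feat_dim, ...) are hypothetical.
import torch
import torch.nn as nn


class DualGaussianHead(nn.Module):
    """Predicts two Gaussian embeddings and a sample-specific fusion weight."""

    def __init__(self, feat_dim: int = 512, embed_dim: int = 512):
        super().__init__()
        # Main branch (mean + log-variance), intended for "easy" samples.
        self.mu1 = nn.Linear(feat_dim, embed_dim)
        self.logvar1 = nn.Linear(feat_dim, embed_dim)
        # Second branch, intended for large-pose samples.
        self.mu2 = nn.Linear(feat_dim, embed_dim)
        self.logvar2 = nn.Linear(feat_dim, embed_dim)
        # Scalar adaptive weight w in (0, 1), predicted per sample.
        self.weight_net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, feat: torch.Tensor):
        mu1, mu2 = self.mu1(feat), self.mu2(feat)
        logvar1, logvar2 = self.logvar1(feat), self.logvar2(feat)
        w = torch.sigmoid(self.weight_net(feat))          # shape (B, 1)

        # Fuse the two sub-Gaussian means into one compact embedding.
        fused = w * mu1 + (1.0 - w) * mu2

        # Weighted Euclidean distance between the branch means: pulls the two
        # distributions together in proportion to the adaptive weight.
        dist_loss = (w.squeeze(-1) * (mu1 - mu2).pow(2).sum(dim=1)).mean()

        # Entropy of the adaptive weight; minimizing it pushes w toward 0 or 1
        # so that one branch dominates for each sample.
        eps = 1e-6
        entropy = -(w * (w + eps).log() + (1 - w) * (1 - w + eps).log()).mean()

        return fused, (mu1, logvar1), (mu2, logvar2), dist_loss, entropy


# Usage sketch: attach the head to any backbone feature and add the auxiliary
# terms to the classification loss (the 0.1 weighting is an assumption).
if __name__ == "__main__":
    head = DualGaussianHead(feat_dim=512, embed_dim=512)
    feat = torch.randn(8, 512)                             # backbone features
    fused, g1, g2, dist_loss, entropy = head(feat)
    total_aux = dist_loss + 0.1 * entropy
    print(fused.shape, dist_loss.item(), entropy.item())
```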