Abstract

This paper applies a joint mean and covariance discrepancy to domain adaptation using a deep Variational Autoencoder (VAE). The proposed model is evaluated on Scale Invariant Feature Transform (SIFT) features extracted from facial images. SIFT-based face recognition relies heavily on detecting facial features that serve as descriptors; a sufficient number of accurate scale-invariant features leads to better matching between the model image and the query image and, in turn, to more robust recognition. Deep variational autoencoders have shown appreciable results in extracting hierarchical latent representations for domain adaptation in face recognition frameworks. Conventionally, VAEs are trained to learn the mean and variance of the input distribution as the latent representation. Maximum mean discrepancy (MMD) measures the distance between distribution means embedded in a reproducing kernel Hilbert space, and therefore captures only first-order statistics of the representation. This work proposes a novel VAE model that instead minimises the maximum mean covariance discrepancy (MMCD): while MMD captures the discrepancy in the spread around the mean, MMCD additionally measures the directional discrepancy in the variance. To verify the efficacy of the MMCD-based VAE over a conventional VAE, SIFT feature extraction is carried out on face images. Quantitative evaluation of domain adaptation and the use of a covariance discrepancy are the two major contributions of this work. It is observed that the MMCD VAE not only yields a larger number of SIFT features but also better correspondence matching of feature points. Images from a newly created Bollywood database and the publicly available LFW database are used for comparative analysis.
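
For concreteness, the two discrepancies contrasted above can be sketched in standard RKHS notation (this formulation and the weighting term \lambda are assumptions for illustration, not taken from the paper): with kernel feature map \phi, source and target distributions P and Q, and mean embeddings \mu_P = \mathbb{E}_{x \sim P}[\phi(x)], \mu_Q = \mathbb{E}_{y \sim Q}[\phi(y)],

\mathrm{MMD}^2(P,Q) = \left\| \mu_P - \mu_Q \right\|_{\mathcal{H}}^2,

\left\| C_P - C_Q \right\|_{\mathrm{HS}}^2, \quad \text{where } C_P = \mathbb{E}_{x \sim P}\big[(\phi(x)-\mu_P) \otimes (\phi(x)-\mu_P)\big],

\mathrm{MMCD}(P,Q) = \mathrm{MMD}^2(P,Q) + \lambda \left\| C_P - C_Q \right\|_{\mathrm{HS}}^2.

The first term matches the embedded means; the Hilbert-Schmidt norm of the covariance-operator difference adds the second-order, directional information that MMD alone does not capture.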
