Abstract

By taking advantage of labeled auxiliary training data whose distribution is similar to that of the gallery, single sample face recognition (SSFR) has achieved encouraging performance. In many real-world applications, however, such an auxiliary training dataset is difficult to collect, whereas it may be easier to obtain an unlabeled target training dataset whose distribution is similar to that of the gallery, together with a labeled source training dataset whose distribution may differ from that of the gallery. How can these three datasets be leveraged effectively for SSFR? To address this question, this paper proposes a new method, Gallery-Sensitive Single Sample Face Recognition based on Domain Adaptation (GS-DA). First, GS-DA applies the TSD (targetize the source domain) method to construct a common subspace and a targetized source domain. Second, it projects each gallery image into the common subspace and computes its sparse representation there. Third, it reconstructs each gallery image from the targetized source domain to estimate the within-class and between-class scatter matrices of the gallery. Finally, it learns a discriminant model by maximizing the sum of the traces of the between-class scatter matrices of the gallery and of the targetized source domain, while minimizing the sum of the traces of the total scatter matrices of the gallery and of the target training data. Experimental results on five datasets demonstrate the superiority of GS-DA in leveraging these three datasets for SSFR.
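The final step pairs a trace to maximize (between-class scatter of the gallery plus the targetized source domain) with a trace to minimize (total scatter of the gallery plus the target training data). The abstract does not specify the solver, but objectives of this LDA-like form are commonly solved as a generalized eigenproblem. The sketch below illustrates that standard formulation only; the function name, the scatter-matrix arguments, and the regularization term are all assumptions for illustration, not part of the paper's stated method.

```python
import numpy as np
from scipy.linalg import eigh


def learn_discriminant_projection(Sb_gallery, Sb_source,
                                  St_gallery, St_target,
                                  n_components, reg=1e-6):
    """Sketch of an LDA-style discriminant step (assumed solver).

    Maximizes tr(W^T A W) while minimizing tr(W^T B W), where
    A = Sb_gallery + Sb_source and B = St_gallery + St_target,
    by solving the generalized eigenproblem A w = lambda B w and
    keeping the eigenvectors with the largest eigenvalues.
    """
    A = Sb_gallery + Sb_source          # between-class scatter to maximize
    B = St_gallery + St_target          # total scatter to minimize
    d = A.shape[0]
    B = B + reg * np.eye(d)             # small ridge for numerical stability
    # eigh returns eigenvalues in ascending order; reverse to take the top ones
    _, vecs = eigh(A, B)
    W = vecs[:, ::-1][:, :n_components]
    return W
```

Columns of `W` then define the discriminant subspace into which probe and gallery images would be projected before matching.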

