Abstract

It has previously been demonstrated that systems based on block-wise local features and Gaussian mixture models (GMMs) are well suited to video-based talking-face verification, as they achieve the best trade-off among complexity, robustness, and performance. In this paper, we propose two methods to enhance the robustness and performance of the GMM-ZTnorm baseline system. First, joint factor analysis is performed to compensate for session variability caused by different recording devices, lighting conditions, facial expressions, etc. Second, the difference between the universal background model (UBM) and the maximum a posteriori (MAP) adapted model is mapped into a GMM mean-shifted supervector, which makes the over-complete dictionary more incoherent. For verification, these GMM mean-shifted supervectors are then modeled by the sparse representation computed via ℓ1-minimization with quadratic constraints. Experimental results show that the proposed system achieves equal error rates of 8.4% (group 1) and 10.5% (group 2) on the BANCA talking-face video database under the P protocol, outperforming the GMM-ZTnorm baseline with a relative error reduction of more than 20%.
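
To make the verification step concrete, the sketch below uses the standard sparse-representation formulation that the terminology above points to; the symbols (dictionary D, coefficient vector x, noise bound \varepsilon) are illustrative assumptions rather than notation taken from the paper. Each GMM mean-shifted supervector stacks, over the C mixture components, the differences between the MAP-adapted means \mu_c and the UBM means m_c, and a test supervector \mathbf{s} is coded over an over-complete dictionary D whose columns are the enrolled clients' supervectors:

\mathbf{s} = \begin{bmatrix} \mu_1 - m_1 \\ \vdots \\ \mu_C - m_C \end{bmatrix}, \qquad \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \text{subject to} \quad \|D\mathbf{x} - \mathbf{s}\|_2 \le \varepsilon.

A verification score can then be derived, for instance, from how strongly the recovered coefficients or the reconstruction residual concentrate on the claimed client's columns, in the spirit of sparse-representation classification.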
