Abstract

Probabilistic linear discriminant analysis (PLDA) is an effective feature extraction approach that has been applied widely and successfully in supervised learning tasks. It measures model errors with the squared L2-norm, which implicitly assumes Gaussian noise. However, noise in real-world applications may not follow a Gaussian distribution; in particular, the squared L2-norm can greatly exaggerate the influence of data outliers. To address this issue, this article proposes a robust PLDA model, called L1-PLDA, under the assumption of Laplacian noise. For learning, the Laplacian density is expressed as a superposition of infinitely many Gaussian distributions through a newly introduced latent variable, and the model parameters are then estimated with a variational expectation-maximization (EM) algorithm. The most significant advantage of the new model is that the introduced latent variable can be used to detect data outliers. Experiments on several public databases show the superiority of the proposed L1-PLDA model in terms of classification and outlier detection.
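The key construction mentioned in the abstract, writing a Laplacian density as a continuous mixture of Gaussians, is the standard Gaussian scale-mixture representation: if the latent variance v follows an exponential distribution with mean 2b^2 and x | v ~ N(0, v), then x is marginally Laplace(0, b). The following minimal sketch (not code from the paper; variable names and the use of NumPy/SciPy are illustrative assumptions) checks this numerically; note that large draws of the latent scale v correspond to the heavy-tailed, outlier-like observations the model is designed to flag.

```python
import numpy as np
from scipy import stats

# Laplace scale parameter b: density (1/(2b)) * exp(-|x|/b)
b = 1.0
n = 200_000
rng = np.random.default_rng(0)

# Latent scale variable: v ~ Exponential(mean = 2*b^2).
# Conditional on v, the noise is Gaussian: x | v ~ N(0, v).
v = rng.exponential(scale=2 * b**2, size=n)
x = rng.normal(loc=0.0, scale=np.sqrt(v))

# Marginally, x should follow Laplace(0, b); compare a few quantiles of |x|
# against the theoretical Laplace quantiles.
for q in (0.5, 0.9, 0.99):
    empirical = np.quantile(np.abs(x), q)
    theoretical = stats.laplace(scale=b).ppf(0.5 + q / 2)
    print(f"q={q}: empirical={empirical:.3f}, Laplace={theoretical:.3f}")
```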
