Abstract

We investigate a nonlinear mapping by which multi-view face patterns in the input space are mapped to invariant points in a low-dimensional feature space. Invariance to both illumination and view is achieved in two stages. First, a nonlinear mapping from the input space to a low-dimensional feature space is learned from multi-view face examples to achieve illumination invariance. The illumination-invariant feature points of face patterns across views lie on a curve parameterized by the view parameter, and the view parameter of a face pattern can be estimated from the location of its feature point on the curve by a least-squares fit. Second, a nonlinear mapping from the illumination-invariant feature space to another feature space of the same dimension is performed to achieve invariance to both illumination and view; this amounts to a normalization based on the view estimate. Through the two-stage nonlinear mapping, multi-view face patterns are mapped to a zero-mean Gaussian distribution in the final feature space. Properties of the nonlinear mappings and of the Gaussian face distribution are explored and supported by experiments.
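The view-estimation step described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's method: it assumes the illumination-invariant features lie on a smooth curve f(θ) parameterized by the view angle θ, fits each feature dimension as a polynomial in θ, and estimates the view of a new feature point by a least-squares fit, i.e. the θ minimizing ||f(θ) − x||².

```python
import numpy as np

def fit_view_curve(thetas, feats, degree=3):
    """Fit each feature dimension as a polynomial in the view parameter."""
    return [np.polyfit(thetas, feats[:, d], degree) for d in range(feats.shape[1])]

def curve_point(coeffs, theta):
    """Evaluate the fitted curve f(theta) in feature space."""
    return np.array([np.polyval(c, theta) for c in coeffs])

def estimate_view(coeffs, x, grid):
    """Least-squares view estimate: theta minimizing ||f(theta) - x||^2."""
    dists = [np.sum((curve_point(coeffs, t) - x) ** 2) for t in grid]
    return grid[int(np.argmin(dists))]

# Toy data: 2-D feature points tracing a known curve of the view angle.
thetas = np.linspace(-90.0, 90.0, 19)                   # view angles (degrees)
feats = np.stack([np.sin(np.radians(thetas)),
                  np.cos(np.radians(thetas))], axis=1)  # synthetic curve
coeffs = fit_view_curve(thetas, feats)
grid = np.linspace(-90.0, 90.0, 721)
theta_hat = estimate_view(coeffs, feats[9], grid)       # feats[9] is theta = 0
```

Here the grid search stands in for whatever optimizer the paper uses; the estimated `theta_hat` should land near the true view angle of the query point.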
