This paper presents a method for face image normalization that can be applied to the extraction of illumination-invariant facial features or used to remove adverse lighting effects and produce high-quality, photorealistic results. Most existing approaches concentrate on separating the constant albedo from the variable light intensity; that concept, however, rests on the Lambertian model, which fails in the presence of specularities and cast shadows. To tackle this problem, various methods use bootstrap sets to learn the reflectance model of a given person or of faces in general; unfortunately, algorithms of this type are usually impractical for real applications. The proposed approach requires no training procedure, as the normalization is performed solely on the basis of information contained in the image being processed. External knowledge, represented by a deformable shape model, is employed only to localize important points such as the eye centers, and an assumption about the symmetry of the face is also exploited. The rest of the normalization algorithm relies mostly on simple image processing techniques. Experiments on the CMU-PIE and Extended Yale B databases show that the new method not only provides satisfactory recognition results but also generates natural-looking images.
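The Lambertian assumption criticized above can be sketched as follows (this is the standard textbook formulation, not an equation taken from the paper; $\rho$, $\mathbf{n}$, and $\mathbf{s}$ denote the usual albedo, surface normal, and light-source direction):

```latex
I(x, y) = \rho(x, y)\,\max\bigl(0,\ \mathbf{n}(x, y) \cdot \mathbf{s}\bigr)
```

Methods that divide out the shading term to recover the constant albedo $\rho$ break down exactly where this model does: at specular highlights and inside cast shadows, the observed intensity is no longer the product of the albedo and a simple cosine shading term.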