Abstract

Face recognition is one of the most active research areas in computer vision, statistical analysis, pattern recognition and machine learning (Huq et al., 2007). Significant progress has been made in the last decade, in particular after the FRVT 2002 (Phillips et al., 2003). For example, O'Toole et al. (2007) showed that face recognition systems surpassed human performance in recognizing faces under different illumination conditions. In spite of this recent progress, the problem of detecting and recognizing faces in uncontrolled biometric environments remains largely unsolved. Other biometric techniques, such as fingerprint and iris recognition, appear to be more accurate and more popular commercially than face recognition (Abate et al., 2007). This is due to the inherent problems of 2D image-based face recognition systems: variations in viewing point, illumination and facial expression pose a great challenge for such systems and significantly affect the performance and accuracy of their algorithms. In their overview of the Face Recognition Grand Challenge (FRGC), Phillips et al. (2006) pointed out several new techniques in face recognition that hold the potential to improve the performance of automatic face recognition significantly over the results of FRVT 2002. Among these, the use of 3D information to improve recognition rates and overcome the inherent problems of 2D image-based face recognition has become a current research trend. In this chapter we present a novel technique for 3D face recognition using a set of parameters representing the central region of the face. These parameters are essentially vertical and cross-sectional profiles and are extracted automatically, without any prior knowledge or assumption about the image pose or orientation. In addition, the profiles are stored in terms of their Fourier coefficients in order to minimize the size of the input data.
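To illustrate the idea of storing a profile compactly as Fourier coefficients, the sketch below compresses a sampled profile curve by keeping only its first few coefficients of a discrete Fourier transform and then reconstructs an approximation. The function names, the coefficient count and the synthetic profile are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def compress_profile(profile, k=16):
    """Keep only the first k real-FFT coefficients of a sampled profile."""
    return np.fft.rfft(profile)[:k]

def reconstruct_profile(coeffs, n):
    """Approximate the original n-sample profile from truncated coefficients."""
    full = np.zeros(n // 2 + 1, dtype=complex)
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n=n)

# Synthetic smooth profile of 256 samples (stand-in for a facial profile curve)
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
profile = np.cos(x) + 0.3 * np.sin(3 * x)

coeffs = compress_profile(profile, k=16)       # 16 complex numbers instead of 256 samples
approx = reconstruct_profile(coeffs, n)
err = np.max(np.abs(profile - approx))         # small for smooth, low-frequency profiles
```

Because facial profiles are smooth curves, most of their energy sits in the low-frequency coefficients, which is why truncating the spectrum can shrink the stored data substantially with little loss.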
The accuracy of the algorithm is validated and verified against two different datasets of 3D images covering a sufficient variety of expression and pose variation. Our computational framework is based on concepts of computational geometry, which yield fast and accurate results. Our first goal is to automatically locate the symmetry profile of the face. This is undertaken by computing the intersection between the symmetry plane and the facial mesh, resulting in a planar curve that accurately represents the symmetry profile. Once the symmetry profile and a few feature points have been located, they are used to align the scanned images within the Cartesian coordinate system, with the tip of the nose residing at the origin.
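The plane-mesh intersection step described above can be sketched with basic computational geometry: for every mesh edge whose endpoints lie on opposite sides of the plane, linear interpolation gives a point on the intersection curve. This is a minimal illustration of the general technique, not the chapter's actual implementation; the function and variable names are assumptions.

```python
import numpy as np

def plane_mesh_intersection(vertices, triangles, point, normal):
    """Intersect the plane (point, normal) with a triangle mesh.

    Returns the points where mesh edges cross the plane, found by
    linear interpolation along each crossing edge. A full implementation
    would also chain these points into an ordered planar curve and
    deduplicate edges shared by adjacent triangles.
    """
    d = (vertices - point) @ normal            # signed distance of each vertex
    pts = []
    for tri in triangles:
        for i in range(3):
            a, b = tri[i], tri[(i + 1) % 3]
            if d[a] * d[b] < 0:                # endpoints on opposite sides
                t = d[a] / (d[a] - d[b])       # interpolation parameter
                pts.append(vertices[a] + t * (vertices[b] - vertices[a]))
    return np.array(pts)

# Example: one triangle straddling the symmetry plane x = 0
verts = np.array([[-1.0, 0.0, 0.0],
                  [ 1.0, 0.0, 0.0],
                  [ 1.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
curve = plane_mesh_intersection(verts, tris,
                                np.array([0.0, 0.0, 0.0]),   # point on plane
                                np.array([1.0, 0.0, 0.0]))   # plane normal
```

All returned points lie on the plane by construction, so the result is a planar curve, matching the property the abstract relies on for the symmetry profile.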
