Abstract
Most recent face recognition systems depend on feature representations obtained using either handcrafted local descriptors, such as local binary patterns (LBP), or deep learning approaches, such as the deep belief network (DBN). However, the former usually suffer from the wide variations in face images, while the latter usually discard the local facial features, which have been shown to be important for face recognition. In this paper, a novel framework that merges the advantages of handcrafted local feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach that combines the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, an anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework is proposed, termed the multimodal deep face recognition (MDFR) framework, which adds feature representations by training a DBN on top of the local feature representations instead of the raw pixel intensities. We demonstrate that the representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated through extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The proposed approaches outperform state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
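The abstract describes the Fractal dimension as a texture descriptor for face images but does not specify the estimator used. A common choice is the box-counting dimension, which measures how the number of occupied boxes N(s) scales with box size s, the dimension being the slope of log N(s) versus log(1/s). The sketch below is an illustrative implementation of that generic estimator, not the authors' exact method; the function name and the fixed binarization threshold are assumptions.

```python
import numpy as np

def box_counting_dimension(img, threshold=0.5):
    """Estimate the box-counting (fractal) dimension of a 2-D image.

    Pixels above `threshold` are treated as foreground. The dimension is
    the slope of the log-log fit of box counts against inverse box size.
    (Illustrative sketch; the paper's exact estimator is not specified.)
    """
    binary = np.asarray(img) > threshold
    # Pad to a square whose side is a power of two so boxes tile exactly.
    n = int(2 ** np.ceil(np.log2(max(binary.shape))))
    padded = np.zeros((n, n), dtype=bool)
    padded[:binary.shape[0], :binary.shape[1]] = binary

    sizes = 2 ** np.arange(int(np.log2(n)), 0, -1)  # n, n/2, ..., 2
    counts = []
    for s in sizes:
        # Tile the image into (n/s) x (n/s) boxes of side s and count the
        # boxes that contain at least one foreground pixel.
        boxes = padded.reshape(n // s, s, n // s, s)
        counts.append(np.any(boxes, axis=(1, 3)).sum())

    # Slope of log N(s) vs log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```

As a sanity check, a filled square yields a dimension near 2 and a straight line near 1; textured face regions fall in between, which is what makes the measure useful as a local descriptor.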
Highlights
In recent years, there has been growing interest in highly secure and well-designed face recognition systems, due to their potentially wide application in many sensitive settings, such as controlling access to physical and virtual places in both commercial and military organizations, including ATM cash dispensers, e-learning, information security, intelligent surveillance, and other everyday applications [1].
The proposed approaches outperform state-of-the-art approaches (e.g., local binary patterns (LBP), deep belief network (DBN), WPCA), achieving new state-of-the-art results on all the employed datasets.
Despite significant improvements in face recognition performance over the past decades, it remains a challenging task for the research community, especially when face images are taken in unconstrained conditions, due to large intra-personal variations, such as changes in facial expression, pose, illumination, and aging, and small interpersonal differences.
Summary
There has been growing interest in highly secure and well-designed face recognition systems, due to their potentially wide application in many sensitive settings, such as controlling access to physical and virtual places in both commercial and military organizations, including ATM cash dispensers, e-learning, information security, intelligent surveillance, and other everyday applications [1]. The task of extracting and learning useful, highly discriminative facial features that minimize intra-personal variations and maximize interpersonal differences is complicated. In this regard, a number of approaches have been proposed, implemented, and refined to address these drawbacks and problems in face recognition systems. To improve the generalization ability and reduce the computational complexity of the DBN, a novel framework that merges the advantages of handcrafted local feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Conclusions and future research directions are stated in the last section.