Over the last decade, recognizing a person's gender from an image of his or her face has become an important research problem. Automatic gender recognition is essential for many applications, such as forensic science and automatic payment systems. However, it remains difficult due to high variability in illumination, expression, pose, age, scale, camera quality and occlusion. Humans can easily distinguish between genders, but this is still a challenging task for computers. Many approaches have been reported in the literature as machine vision has advanced, yet no definitive optimal solution has been found. For practical use, this work proposes a novel gender classification approach based mainly on image intensity variation, shape and texture features. These multi-attribute features are combined at different spatial scales or levels. The proposed system is evaluated on two datasets: the Facial ExpressIon Set (FEI) dataset and a self-built dataset containing various facial expressions. Eight local directional pattern (LDP) operators are used to extract facial edge features, the local binary pattern (LBP) is used to extract texture features, and intensity serves as an additional feature. Spatial histograms computed from these features are then concatenated to build a gender descriptor. The proposed descriptor efficiently extracts discriminating information at three different levels: regional, global and directional. Once the gender descriptor is extracted, a linear kernel-based support vector machine, which outperformed other classifiers, is used to classify each face image as either male or female. The experimental results show that combining the outcomes of multi-scale, multi-block, distinct and prime feature classification yields higher accuracy than using a single-scaled image. The proposed approach, implemented in MATLAB, achieves an accuracy of 99% on the FEI face dataset (200 faces) and 94% on the self-built dataset (200 faces).
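For readers who want a concrete picture of the descriptor described above, the following is a minimal sketch, written in Python rather than the authors' MATLAB implementation, of a block-wise intensity + LBP + directional-edge histogram pipeline feeding a linear SVM. The Kirsch-mask LDP variant, the 4x4 block grid, and the bin counts are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of the multi-feature gender descriptor: per-block histograms
# of intensity, LBP texture codes, and eight-direction Kirsch edge responses are
# concatenated and classified with a linear SVM. Parameters are assumptions.
import numpy as np
from scipy.ndimage import convolve
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

# Eight Kirsch compass masks (one per edge direction), a common LDP formulation.
KIRSCH = [np.array(m, dtype=np.float64) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def block_histograms(feature_map, n_bins, value_range, grid=(4, 4)):
    """Split a 2-D feature map into grid blocks and concatenate their histograms."""
    hists = []
    for row in np.array_split(feature_map, grid[0], axis=0):
        for block in np.array_split(row, grid[1], axis=1):
            h, _ = np.histogram(block, bins=n_bins, range=value_range)
            hists.append(h / (h.sum() + 1e-8))  # normalise each block histogram
    return np.concatenate(hists)

def gender_descriptor(gray):
    """Concatenate intensity, LBP-texture, and directional-edge block histograms."""
    gray = gray.astype(np.float64)

    # 1) Intensity: grey-level distribution per block.
    f_int = block_histograms(gray, n_bins=16, value_range=(0, 255))

    # 2) Texture: uniform LBP codes (8 neighbours, radius 1) per block.
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    f_lbp = block_histograms(lbp, n_bins=10, value_range=(0, 9))

    # 3) Shape/edges: index of the strongest of the eight Kirsch responses.
    responses = np.stack([convolve(gray, k) for k in KIRSCH])
    dominant_dir = responses.argmax(axis=0)
    f_dir = block_histograms(dominant_dir, n_bins=8, value_range=(0, 7))

    return np.concatenate([f_int, f_lbp, f_dir])

# Usage sketch (face_images and labels are assumed to be prepared elsewhere):
#   X = np.array([gender_descriptor(img) for img in face_images])
#   clf = LinearSVC(C=1.0).fit(X, labels)   # labels: 0 = female, 1 = male
```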