Abstract

Human gender plays an important role as a social construct and is an essential facet of an individual’s identity. Gender recognition is relevant to social communication, forensic science, surveillance, and targeted marketing. The task has typically been performed in conjunction with face recognition, as the face yields higher performance than other cues such as gait and sound, owing to its richness in clues useful for gender inspection. However, facial features have usually been used separately without any integration between them, and a definitive solution has not yet been found. We propose a robust feature fusion method that integrates various facial features, in addition to a set of new features. Haar-like features are used to extract the essential descriptors, and the new features considered are the beard, moustache, cheeks, forehead, and face. The scientific novelty of this research is the robust fusion of features to achieve high recognition accuracy. The proposed fusion method is priority-based: it arranges classifiers’ responses from strongest to weakest according to their importance to yield a final decision. When compared with recent and standard benchmarks, the proposed technique achieves high recognition accuracy. Experiments on standard and challenging datasets widely adopted by the scientific community, namely LFW, DataHub, NIST, and Caltech WebFaces, demonstrated robust performance with 99.1% accuracy.

Keywords: Gender Recognition, Standard Images, Facial Features

DOI: https://doi.org/10.35741/issn.0258-2724.58.1.58
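The abstract describes the fusion as priority-based, ordering per-region classifier responses from strongest to weakest before producing the final decision. The following is a minimal Python sketch of one way such a rule could work, assuming each facial region (beard, moustache, cheeks, forehead, face) yields a label with a confidence score; the priority order, weighting scheme, and data structures are illustrative assumptions, not the authors' exact method.

```python
from dataclasses import dataclass

@dataclass
class RegionResponse:
    region: str        # e.g. "beard", "moustache", "cheeks", "forehead", "face"
    label: str         # "male" or "female"
    confidence: float  # classifier score in [0, 1]

# Hypothetical priority order, strongest cue first (illustrative only).
PRIORITY = ["beard", "moustache", "face", "forehead", "cheeks"]

def priority_fuse(responses):
    """Fuse per-region classifier responses using a fixed priority order.

    Each response is weighted by its region's priority rank and its
    confidence; the label with the larger weighted sum is returned.
    """
    rank = {r: len(PRIORITY) - i for i, r in enumerate(PRIORITY)}
    scores = {}
    for resp in responses:
        weight = rank.get(resp.region, 1) * resp.confidence
        scores[resp.label] = scores.get(resp.label, 0.0) + weight
    return max(scores, key=scores.get)

if __name__ == "__main__":
    votes = [
        RegionResponse("beard", "male", 0.92),
        RegionResponse("cheeks", "female", 0.55),
        RegionResponse("forehead", "male", 0.61),
    ]
    print(priority_fuse(votes))  # -> "male"
```

In this reading, a strong response from a high-priority region (such as the beard) dominates weaker responses from lower-priority regions, which matches the abstract's description of arranging responses from strongest to weakest to reach the final decision.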
