Abstract

The automatic estimation of gender and facial expression is an important task in many applications. In this context, we believe that an accurate segmentation of the human face can provide useful information about these mid-level features, owing to the strong interaction between facial parts and these features. Following this idea, in this paper we present a gender and facial expression estimator based on a semantic segmentation of the human face into six parts. The proposed algorithm works in several steps. First, a database of face images was manually labeled to train a discriminative model. Then, three kinds of features, namely location, shape, and color, were extracted from uniformly sampled square patches. Using the trained model, facial images were then segmented into six semantic classes (hair, skin, nose, eyes, mouth, and background) with a Random Decision Forest (RDF) classifier. In the final step, a linear Support Vector Machine (SVM) classifier was trained for each considered mid-level feature (i.e., gender and expression) using the corresponding probability maps. The performance of the proposed algorithm was evaluated on two face databases, FEI and FERET. The simulation results show that the proposed algorithm performs competitively with the state of the art.
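The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `RandomForestClassifier` as the RDF and `LinearSVC` as the attribute classifier, and all feature vectors and labels are synthetic stand-ins for the real patch features and manual annotations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stage 1: an RDF labels each square patch with one of six semantic
# classes (hair, skin, nose, eyes, mouth, background) from its
# location/shape/color feature vector. Features and labels are synthetic here.
n_patches, n_features, n_classes = 600, 12, 6
X_patches = rng.normal(size=(n_patches, n_features))      # stand-in patch features
y_patches = np.tile(np.arange(n_classes), n_patches // n_classes)  # stand-in labels

rdf = RandomForestClassifier(n_estimators=50, random_state=0)
rdf.fit(X_patches, y_patches)

# Stage 2: the per-class probability maps (summarized here as the mean
# patch probabilities per image) feed one linear SVM per mid-level
# attribute, e.g. gender.
n_images, patches_per_image = 30, 20
new_patches = rng.normal(size=(n_images * patches_per_image, n_features))
probs = rdf.predict_proba(new_patches)
X_images = probs.reshape(n_images, patches_per_image, n_classes).mean(axis=1)
y_gender = rng.integers(0, 2, size=n_images)              # stand-in gender labels

svm = LinearSVC()
svm.fit(X_images, y_gender)
pred = svm.predict(X_images)
print(pred.shape)  # one gender prediction per image: (30,)
```

In practice the probability maps would be spatially structured (one probability per pixel and class) rather than pooled averages; the pooling above is only a compact stand-in so the example stays self-contained.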
