Abstract

Appearance-based gender classification is a key area in pedestrian analysis, with applications including visual surveillance, demographic statistics prediction, population estimation, and human–computer interaction. For pedestrian gender classification, traditional and deep convolutional neural network (CNN) approaches have been employed individually; however, each faces challenges such as weakly discriminative feature representations, low classification accuracy, and small sample sizes for model learning. To address these issues, this article proposes a framework that combines traditional and deep CNN approaches for gender classification. To this end, HOG (histogram of oriented gradients) and LOMO (local maximal occurrence) low-level features are extracted to handle rotation, viewpoint, and illumination variations in the images. Simultaneously, VGG19- and ResNet101-based standard deep CNN architectures are employed to acquire deep features that are robust against pose variations. To avoid ambiguous and redundant feature representations, entropy-controlled features are selected from both the low-level and deep representations, reducing the dimensionality of the computed features. By merging the selected low-level features with the deep features, we obtain a robust joint low-level and deep feature representation (J-LDFR). Extensive experiments are conducted on the PETA and MIT datasets, and the results suggest that integrating low-level and deep feature representations improves performance compared with using either representation alone. The proposed framework achieves an AUC-ROC of 96% and an accuracy of 89.3% on the PETA dataset, and an AUC-ROC of 86% and an accuracy of 82% on the MIT dataset. These experimental outcomes show that the proposed J-LDFR framework outperforms existing gender classification methods.
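Because the abstract summarizes a multi-stage pipeline (hand-crafted features, deep CNN features, entropy-based selection, and fusion), a brief sketch may help make it concrete. The Python code below is a minimal, illustrative reading of such a scheme, not the authors' implementation: the entropy_select heuristic, the keep_ratio value, and the use of penultimate-layer CNN activations are assumptions, and LOMO extraction is omitted because it has no standard library implementation. It assumes scikit-image and torchvision are available.

import numpy as np
import torch
from skimage.feature import hog
from skimage.transform import resize
from torchvision import models, transforms

# Pretrained backbones; removing the final classifier layers leaves the
# penultimate feature vectors (4096-d for VGG19, 2048-d for ResNet101).
vgg19 = models.vgg19(weights="IMAGENET1K_V1").eval()
vgg19.classifier = vgg19.classifier[:-1]
resnet101 = models.resnet101(weights="IMAGENET1K_V1").eval()
resnet101.fc = torch.nn.Identity()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def hog_features(image_gray):
    # Low-level HOG descriptor; a 128x64 window is standard for pedestrians.
    window = resize(image_gray, (128, 64))
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def deep_features(image_rgb, backbone):
    # Deep features from a pretrained CNN, taken before the classifier head.
    with torch.no_grad():
        return backbone(preprocess(image_rgb).unsqueeze(0)).flatten().numpy()

def entropy_select(batch, keep_ratio=0.5):
    # Assumed heuristic: score each dimension by the Shannon entropy of its
    # histogram over the batch and keep the most informative fraction.
    scores = []
    for d in range(batch.shape[1]):
        counts, _ = np.histogram(batch[:, d], bins=32)
        p = counts[counts > 0] / counts.sum()
        scores.append(-(p * np.log2(p)).sum())
    return np.argsort(scores)[::-1][: int(batch.shape[1] * keep_ratio)]

def joint_representation(low_batch, deep_batch):
    # Fuse entropy-selected low-level and deep features by concatenation.
    low_idx = entropy_select(low_batch)
    deep_idx = entropy_select(deep_batch)
    return np.hstack([low_batch[:, low_idx], deep_batch[:, deep_idx]])

In such a pipeline, the fused vector would then feed a conventional classifier (for example, an SVM) to produce the final gender decision; the paper's exact classifier, selection criterion, and thresholds should be taken from the full text.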
