Abstract
The use of biometric features for surveillance and for the recognition of attributes such as gender, age, and race is widespread and popular among researchers. Various studies have addressed gender recognition using facial, gait, or voice features. This study aimed to recognize a person's gender by analyzing hand images with a deep learning model. Before training, the images were subjected to several preprocessing stages. In the first stage, the joint (landmark) points were detected for both sides of the hand using the MediaPipe framework. Using the detected points, the hand orientation was corrected by rotating each image so that the fingers pointed upwards. In the last preprocessing stage, the images were smoothed with a guided filter, which preserves edges. The processed images were used to train and test different versions of the ResNet model. The results were compared with those of other studies on the same dataset. The proposed method achieved 96.67% recognition accuracy.
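The preprocessing pipeline summarized above can be sketched in a few lines of Python. The following is a minimal illustration, not the authors' code: it assumes MediaPipe Hands for landmark detection, uses the wrist-to-middle-finger-MCP vector as a heuristic for hand orientation, and applies OpenCV's guided filter (from the ximgproc contrib module) for edge-preserving smoothing. The landmark indices, rotation heuristic, and filter parameters are illustrative assumptions, not the paper's settings.

```python
# Sketch of the described preprocessing: landmark detection, rotation so the
# fingers point upwards, and edge-preserving smoothing with a guided filter.
# Requires mediapipe and opencv-contrib-python (for cv2.ximgproc).
import cv2
import numpy as np
import mediapipe as mp


def preprocess_hand(image_bgr):
    """Rotate the hand upright and smooth the image; returns None if no hand is found."""
    mp_hands = mp.solutions.hands
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None

    lm = result.multi_hand_landmarks[0].landmark
    h, w = image_bgr.shape[:2]
    # Landmark 0 is the wrist, landmark 9 the middle-finger MCP joint; their
    # connecting vector approximates the hand's orientation.
    wrist = np.array([lm[0].x * w, lm[0].y * h])
    middle_mcp = np.array([lm[9].x * w, lm[9].y * h])
    dx, dy = middle_mcp - wrist
    # Rotation angle that makes the wrist->MCP vector point straight up
    # (image y-axis points downwards).
    angle = np.degrees(np.arctan2(dx, -dy))
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image_bgr, rot, (w, h))

    # Guided filter: smooths homogeneous regions while preserving edges.
    # Radius and eps are illustrative values, not the paper's settings.
    smoothed = cv2.ximgproc.guidedFilter(rotated, rotated, 8, 0.01 * 255 * 255)
    return smoothed
```

The preprocessed images would then be fed to a standard ResNet classifier (e.g. a torchvision ResNet with a two-class output head); the specific ResNet variants and training setup are detailed in the full paper.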