Abstract

Hand gestures offer humans a natural way to interact with computers across a variety of applications. However, factors such as the complexity of hand gesture patterns, differences in hand size and posture, and environmental lighting can affect the performance of hand gesture recognition algorithms. Recent advances in deep learning have greatly improved the performance of image recognition systems; in particular, deep convolutional neural networks have demonstrated superior image representation and classification compared to conventional machine learning methods. This article compares two techniques for recognizing American Sign Language hand gestures: a proposed deep convolutional neural network and transfer learning with the pre-trained MobileNetV2 model. Both models are trained and tested on 1815 static hand gesture images from five volunteers, segmented by colour against a black background and incorporating variations in scale, lighting, and noise. The results show that the proposed CNN model achieved a classification accuracy of 98.9%, an improvement of roughly 2% over the 97.06% achieved by the transfer-learning model.
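The transfer-learning approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input size, number of gesture classes, and classifier head are assumptions chosen for demonstration.

```python
import tensorflow as tf

def build_transfer_model(num_classes, weights="imagenet"):
    """Build a gesture classifier on top of a frozen MobileNetV2 backbone.

    num_classes and the dense head below are illustrative assumptions;
    the paper does not specify the exact architecture of its head.
    """
    # Pre-trained MobileNetV2 without its ImageNet classification top.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights=weights
    )
    # Freeze the backbone so only the new head is trained (transfer learning).
    base.trainable = False

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = base(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

After training the head on the segmented gesture images, the backbone can optionally be unfrozen for fine-tuning at a lower learning rate, a common second stage of transfer learning.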
