Abstract

Gesture recognition technology has become increasingly popular in various fields due to its potential for controlling consumer devices, robotics, and even translating sign language. However, practical applications in daily life, especially in consumer products, remain scarce. Previous studies have focused primarily on recognition itself and have not explored how gesture recognition can be applied in human interaction. This work addresses that gap by deploying a gesture recognition model in practical applications and exploring its potential for human interaction. The proposed model combines a 3D convolutional neural network (3DCNN) with a long short-term memory (LSTM) network for spatiotemporal feature learning. Six common interactive dashboard gestures were extracted from the 20BN-Jester dataset to train the model, and the validation accuracy reached 80.47% after ablation studies. Two test cases were then conducted to evaluate the model's effectiveness in controlling Spotify and an Artificial Intelligence of Things (AIoT) smart classroom. The response time for each recognition was approximately 100 to 150 milliseconds. This research aims to develop a simple yet efficient hand gesture recognition model that can be applied in fields beyond music playback and smart classrooms. With the growing popularity of gesture recognition technology, this study contributes to practical applications and product innovations that can be integrated into daily life, making the technology more accessible to the general public.
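To illustrate the kind of architecture the abstract describes, the sketch below shows a minimal 3DCNN + LSTM gesture classifier in PyTorch. It is an illustrative assumption only: layer sizes, frame counts, and the framework itself are not taken from the paper, and the class name Gesture3DCNNLSTM is hypothetical.

# Hypothetical sketch of a 3D-CNN + LSTM gesture classifier (PyTorch).
# Hyperparameters are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class Gesture3DCNNLSTM(nn.Module):
    def __init__(self, num_classes=6, lstm_hidden=128):
        super().__init__()
        # 3D convolutions learn short-range spatiotemporal features
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),   # pool spatially, keep all time steps
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
        )
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))  # collapse H and W per frame
        # LSTM models longer-range temporal dynamics across the frame sequence
        self.lstm = nn.LSTM(input_size=64, hidden_size=lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, num_classes)

    def forward(self, clip):
        # clip: (batch, channels=3, frames, height, width)
        x = self.features(clip)
        x = self.pool(x).squeeze(-1).squeeze(-1)   # (batch, 64, frames)
        x = x.permute(0, 2, 1)                     # (batch, frames, 64)
        _, (h_n, _) = self.lstm(x)
        return self.classifier(h_n[-1])            # logits for the six gestures

# Example: a batch of two 16-frame 112x112 RGB clips
logits = Gesture3DCNNLSTM()(torch.randn(2, 3, 16, 112, 112))
print(logits.shape)  # torch.Size([2, 6])

In this pattern the 3D convolutions capture local motion within short frame windows, while the LSTM aggregates those features over the full clip before a single linear layer produces class scores for the six gestures.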
