Abstract

Recognizing and distinguishing a user's behaviors and gestures has become important owing to the increasing use of wearable devices such as smartwatches. This study proposes a method for classifying hand gestures by emitting sound in the nonaudible frequency range from a smartphone and analyzing the reflected signal. The proposed method converts the recorded reflected sound into a time-frequency image using the short-time Fourier transform (STFT), and the resulting data are fed into a convolutional neural network (CNN) model to classify hand gestures. The results show an average classification accuracy of 87.75% across eight hand gestures. Additionally, the proposed method is confirmed to achieve higher classification accuracy than other machine learning classification algorithms.
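
As a concrete illustration of this pipeline, the sketch below converts a recorded reflection into a log-magnitude spectrogram with the STFT and passes it through a small CNN that outputs probabilities for eight gesture classes. The sampling rate, STFT window, and network layers are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from scipy.signal import stft
import tensorflow as tf
from tensorflow.keras import layers

fs = 44100                        # assumed microphone sampling rate
recording = np.random.randn(fs)   # placeholder: one second of recorded reflections

# Short-time Fourier transform: 1-D waveform -> 2-D time-frequency image.
f, t, Zxx = stft(recording, fs=fs, nperseg=512, noverlap=384)
spectrogram = np.log1p(np.abs(Zxx))            # shape: (freq_bins, time_frames)
x = spectrogram[np.newaxis, ..., np.newaxis]   # add batch and channel axes

# A small CNN mapping the spectrogram image to one of 8 gesture classes
# (layer sizes are illustrative, not the authors' architecture).
model = tf.keras.Sequential([
    layers.Input(shape=spectrogram.shape + (1,)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(8, activation="softmax"),     # 8 hand gesture classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model(x).shape)   # (1, 8): class probabilities for this recording
```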

Highlights

  • With the advancement of IT, the use of wearable devices, such as smartwatches, and IoT-based devices is becoming common

  • In ER [3], one of the studies on gesture recognition using sound waves, sound in the nonaudible frequency range is emitted and captured using the built-in speaker and microphone of a smartphone, and behaviors are classified based on the Doppler effect

  • A method was presented that finds the frequency responses over time using the STFT and applies them to convolutional neural network (CNN) models for data classification

Summary

Introduction

With the advancement of IT, the use of wearable devices, such as smartwatches, and IoT-based devices is becoming common. Google Soli [1], which uses radio-frequency signals, and Okuli [2], which uses an optical sensor, aim to control devices through gesture recognition, but both require additional hardware, such as an optical sensor or a radio-frequency chip. ER [3], one of the studies on gesture recognition using sound waves, instead continuously emits sound in the nonaudible frequency range through the built-in speaker of a smartphone, records it with the built-in microphone, and classifies behaviors from the Doppler shifts observed in the reflected signals.
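
As a rough sketch of this Doppler-based sensing idea (under assumed parameters, not the setup of ER [3] or of this paper), the code below generates a continuous tone at an assumed 20 kHz carrier to be played through the speaker and estimates the frequency shift of a recorded reflection by locating the spectral peak near the carrier. A hand moving toward the phone raises the reflected frequency slightly; moving away lowers it.

```python
import numpy as np

fs = 48000          # assumed sampling rate supported by the phone's audio hardware
carrier = 20000.0   # assumed nonaudible carrier frequency in Hz
duration = 0.5      # seconds of signal analyzed at a time

t = np.arange(int(fs * duration)) / fs
tone = np.sin(2 * np.pi * carrier * t)   # continuous tone to play through the speaker

def doppler_shift(recorded, fs=fs, carrier=carrier):
    """Estimate the dominant frequency shift of the reflected signal relative
    to the emitted carrier (positive: reflector moving toward the phone)."""
    spectrum = np.abs(np.fft.rfft(recorded * np.hanning(len(recorded))))
    freqs = np.fft.rfftfreq(len(recorded), d=1.0 / fs)
    # Examine only a narrow band around the carrier, where reflections appear
    # (the +/- 200 Hz band limit is an assumption for illustration).
    band = (freqs > carrier - 200) & (freqs < carrier + 200)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq - carrier

# With no movement the reflected tone sits on the carrier, so the shift is ~0 Hz.
print(doppler_shift(tone))
```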
