Abstract

A-mode ultrasound offers high resolution, low computational cost, and low hardware cost for predicting dexterous gestures. To promote the wider adoption of A-mode ultrasound gesture recognition, we designed a human-machine interface that interacts with the user in real time. Data processing includes Gaussian filtering, feature extraction, and PCA dimensionality reduction. Naive Bayes (NB), linear discriminant analysis (LDA), and support vector machine (SVM) algorithms were selected to train machine learning models, and the entire pipeline was implemented in C++ to classify gestures in real time. Offline and real-time experiments were conducted with the proposed HMI-A (human-machine interface based on A-mode ultrasound), involving ten subjects and ten common gestures. To demonstrate the effectiveness of HMI-A and reduce the influence of chance, ten rounds of gestures were collected from each subject in the offline experiment for ten-fold cross-validation. The results show an offline recognition accuracy of 96.92% ± 1.92%. The real-time experiment was evaluated with four online performance metrics: action selection time, action completion time, action completion rate, and real-time recognition accuracy. The results show an action completion rate of 96.0% ± 3.6% and a real-time recognition accuracy of 83.8% ± 6.9%. This study verifies the great potential of wearable A-mode ultrasound technology and broadens the range of application scenarios for gesture recognition.
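
The abstract does not give implementation details, so the following is only a minimal C++ sketch of the kind of per-channel preprocessing it describes: Gaussian smoothing of one A-mode echo signal followed by segment-wise feature extraction before PCA and classification. All function names, kernel parameters, and the segment-mean feature are illustrative assumptions, not the authors' code.

```cpp
// Illustrative sketch (hypothetical, not the paper's implementation):
// Gaussian smoothing of a single A-mode echo channel, then segment-wise
// mean-absolute-amplitude features. Radius, sigma, and segment count are
// arbitrary example values.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Smooth a 1-D signal with a normalized Gaussian kernel (edge samples clamped).
std::vector<double> gaussianFilter(const std::vector<double>& x,
                                   int radius, double sigma) {
    std::vector<double> kernel(2 * radius + 1);
    double sum = 0.0;
    for (int i = -radius; i <= radius; ++i) {
        kernel[i + radius] = std::exp(-(i * i) / (2.0 * sigma * sigma));
        sum += kernel[i + radius];
    }
    for (double& w : kernel) w /= sum;  // normalize to unit gain

    std::vector<double> y(x.size(), 0.0);
    for (std::size_t n = 0; n < x.size(); ++n) {
        for (int i = -radius; i <= radius; ++i) {
            int idx = static_cast<int>(n) + i;
            if (idx < 0) idx = 0;                                  // clamp left edge
            if (idx >= static_cast<int>(x.size()))
                idx = static_cast<int>(x.size()) - 1;              // clamp right edge
            y[n] += kernel[i + radius] * x[idx];
        }
    }
    return y;
}

// Split the smoothed echo into equal segments and use each segment's mean
// absolute amplitude as one feature (a common A-mode feature choice; the
// actual features used in the paper are not specified in the abstract).
std::vector<double> extractFeatures(const std::vector<double>& x, int segments) {
    std::vector<double> features;
    const std::size_t len = x.size() / segments;
    for (int s = 0; s < segments; ++s) {
        double acc = 0.0;
        for (std::size_t n = s * len; n < (s + 1) * len; ++n)
            acc += std::abs(x[n]);
        features.push_back(acc / static_cast<double>(len));
    }
    return features;
}

int main() {
    // Synthetic echo standing in for one ultrasound channel.
    std::vector<double> echo(1000);
    for (std::size_t n = 0; n < echo.size(); ++n)
        echo[n] = std::sin(0.05 * n) + 0.3 * std::sin(0.7 * n);

    const std::vector<double> smoothed = gaussianFilter(echo, /*radius=*/5, /*sigma=*/2.0);
    const std::vector<double> feat = extractFeatures(smoothed, /*segments=*/10);

    for (double f : feat) std::cout << f << ' ';
    std::cout << '\n';  // this feature vector would then go to PCA and a classifier
    return 0;
}
```

In practice, such per-channel feature vectors from all transducers would be concatenated, reduced with PCA, and fed to the NB, LDA, or SVM classifier mentioned in the abstract.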
