Abstract

Mechanical hands are widely used in industrial and rehabilitation fields. Good human–machine interaction enables a mechanical hand to execute human intentions quickly and to complete collaborative tasks effectively, while different environments affect its performance. Therefore, the modality diversity and environmental adaptability of human–machine interaction are important. In this study, a visual modality was developed based on the MediaPipe framework and TensorFlow Lite. An audio modality was developed based on Chinese finger-guessing game terms and speaker-independent voice recognition technology. A surface electromyography (sEMG) modality was developed based on machine learning, and a touch interface was developed based on a serial touch screen. Furthermore, the user can switch between modalities through the touch screen according to the environment. The experimental results show that the average accuracies of the visual, audio, and sEMG modalities are 98.2463% ± 1.5057%, 97.3132% ± 0.692%, and 96.3454% ± 2.0108%, respectively. On a computer equipped with an Intel(R) Core(TM) i5-1137G7 CPU, the execution times of the visual, audio, and sEMG modalities for a single action in Python 3.6 are 0.03 s, 0.81 s, and 0.19 s, respectively. Compared with several existing methods, the proposed method offers richer modalities, higher accuracy, good real-time motion recognition, and real-time modality switching.
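
To illustrate how a MediaPipe-plus-TensorFlow-Lite visual modality of this kind can be wired together, the sketch below detects hand landmarks with MediaPipe and feeds them to a TFLite gesture classifier. This is a minimal sketch, not the paper's actual implementation: the model file gesture_classifier.tflite and the 63-dimensional flattened-landmark feature layout are assumptions for illustration.

    import cv2
    import mediapipe as mp
    import numpy as np
    import tensorflow as tf

    mp_hands = mp.solutions.hands

    # Hypothetical gesture classifier exported as a TFLite model;
    # assumed input: 1 x 63 float32 (21 landmarks x (x, y, z)).
    interpreter = tf.lite.Interpreter(model_path="gesture_classifier.tflite")
    interpreter.allocate_tensors()
    input_idx = interpreter.get_input_details()[0]["index"]
    output_idx = interpreter.get_output_details()[0]["index"]

    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                lm = results.multi_hand_landmarks[0].landmark
                # Flatten the 21 (x, y, z) landmarks into one feature vector.
                features = np.array(
                    [[c for p in lm for c in (p.x, p.y, p.z)]], dtype=np.float32
                )
                interpreter.set_tensor(input_idx, features)
                interpreter.invoke()
                gesture_id = int(np.argmax(interpreter.get_tensor(output_idx)))
                print("predicted gesture:", gesture_id)
            cv2.imshow("hand", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
                break
    cap.release()
    cv2.destroyAllWindows()

Because inference runs on precomputed landmark coordinates rather than raw pixels, a per-frame classification on a laptop-class CPU is plausible, consistent with the low per-action latency reported for the visual modality.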