Abstract

Usability and social-acceptance challenges of text input in extended reality (XR) motivate research into novel input modalities. We investigate the fusion of inertial measurement unit (IMU) control and surface electromyography (sEMG) gesture recognition applied to text entry on a QWERTY-layout virtual keyboard. We design, implement, and evaluate MyoBoard, a multi-modal, IMU-driven, sEMG-based text entry system for XR headsets built around the QWERTY keyboard layout. Users select characters through a combination of arm movements and hand gestures. MyoBoard employs a lightweight convolutional neural network classifier that can be deployed on a mobile device with negligible inference time. We demonstrate the practicality of interruption-free text entry in mobile scenarios with a 12-participant study testing three sets of grasp micro-gestures in three scenarios: empty-hand input, input while holding a pen, and input while holding a large object such as an umbrella. With MyoBoard, users achieve average text entry rates of 9.33 words per minute (WPM), 8.76 WPM, and 8.35 WPM for the freehand, pen, and umbrella conditions, respectively.
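
The abstract does not disclose MyoBoard's network architecture, so the following is only an illustrative sketch of what a lightweight CNN gesture classifier of the kind described might look like. The channel count (8), window length (200 samples), and number of gesture classes (5) are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch: a lightweight 1D CNN that classifies a short window
# of multi-channel sEMG into grasp micro-gestures. All sizes are assumed;
# the paper's abstract does not specify the architecture.
import torch
import torch.nn as nn

class EMGGestureCNN(nn.Module):
    def __init__(self, n_channels: int = 8, n_gestures: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolutions over the sEMG window.
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            # Global pooling keeps the parameter count small enough
            # for on-device, low-latency inference.
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, window_length)
        return self.classifier(self.features(x).squeeze(-1))

# Single forward pass on a fake 200-sample window; a model of this size
# runs in low milliseconds on a mobile-class CPU.
model = EMGGestureCNN()
logits = model(torch.randn(1, 8, 200))
predicted_gesture = logits.argmax(dim=-1)
```

A model in this size range (tens of thousands of parameters) is consistent with the paper's claim of mobile deployment with negligible inference time, which is the design constraint such a classifier must satisfy.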
