Abstract
The increased use of Virtual and Augmented Reality based systems necessitates the development of more intuitive and unobtrusive means of interfacing. In recent years, Electromyography (EMG) based interfaces have been employed for interaction with robotic and computer applications, but no studies have investigated the continuous decoding of the effects of human motion (e.g., manipulated object behavior) in simulated and virtual environments. In this work, we compare the object motion decoding accuracy of an EMG-based learning framework for two different dexterous manipulation scenarios: i) simulated objects handled by a teleoperated hand model within a virtual environment and ii) real, everyday objects manipulated by the human hand. To do so, we utilize EMG activations from 16 muscle sites (9 on the hand and 7 on the forearm). The object motion decoding is formulated as a regression problem using the Random Forests methodology. A 5-fold cross-validation procedure is used for model assessment, and feature variable importance values are calculated for each model. The decoding accuracy for the real world is considerably higher than for the virtual world. Each of the objects examined had a single manipulation motion that offered the highest estimation accuracy across both worlds. This study also shows that it is feasible to decode object motions using just the myoelectric activations of the muscles of the forearm and the hand. This is particularly surprising since the simulations lacked haptic feedback and did not account for other dynamic phenomena such as friction and contact rolling.
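For concreteness, the following is a minimal sketch of the decoding pipeline described above, written in Python with scikit-learn. The windowed features, channel ordering, and hyperparameters are assumptions for illustration only and do not reflect the authors' exact implementation.

```python
# Illustrative sketch only: Random Forest regression of object motion from
# 16-channel EMG features, assessed with 5-fold cross-validation. Feature
# extraction, channel layout, and hyperparameters are assumptions here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: one row per time window, 16 EMG features
# (e.g., per-channel RMS), one target value per window.
X = rng.standard_normal((1000, 16))   # 9 hand + 7 forearm muscle sites
y = rng.standard_normal(1000)         # object motion target (e.g., displacement)

model = RandomForestRegressor(n_estimators=100, random_state=0)

# 5-fold cross-validation for model assessment (R^2 by default).
scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("Per-fold R^2:", scores)

# Fit on all data to inspect feature (muscle site) importance values.
model.fit(X, y)
print("Feature importances:", model.feature_importances_)
```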
Highlights
In recent years, Virtual Reality (VR) and Augmented Reality (AR) systems have been employed for a plethora of applications in entertainment, research, and education.
We show that it is feasible to decode object motions performed in a virtual world, despite the lack of haptic feedback and without accounting for other dynamic phenomena, using just the myoelectric activations of the muscles of the forearm and the hand.
The presented accuracies are for models trained using the myoelectric activations of the forearm and of the hand separately (illustrated in the sketch below).
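A minimal sketch of this forearm-versus-hand comparison is given below; the column split (first 9 columns = hand sites, last 7 = forearm sites) is an assumption for illustration, as the actual channel layout is not specified here.

```python
# Illustrative sketch: train separate decoders on hand-only and forearm-only
# EMG channels and compare cross-validated accuracy. The channel split is
# an assumption, not the authors' documented layout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 16))   # 16 EMG feature channels
y = rng.standard_normal(1000)         # object motion target

subsets = {"hand (9 sites)": X[:, :9], "forearm (7 sites)": X[:, 9:]}
for name, X_sub in subsets.items():
    scores = cross_val_score(RandomForestRegressor(random_state=0), X_sub, y, cv=5)
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```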
Summary
Virtual Reality (VR) and Augmented Reality (AR) systems have been employed for a plethora of applications in entertainment, research, and education. Traditional methods of interaction with VR/AR systems include handheld controllers and speech- or vision-based gesture recognition devices. These methods have the following limitations: i) they require an observable gesture, which can be awkward in social situations, ii) they depend on environmental factors such as ambient light and background noise, iii) vision-based systems are prone to occlusions [1], and iv) handheld controllers are inadequate for intuitive and non-fatiguing interaction with the device [2]. Such interfaces provide only kinematic information and ignore dynamics; that is, the effort put in by the user is not captured. This prompts the need for an interface that provides embodied interactions with dynamic and unstructured environments.