Abstract

Developing a user interface (UI) suitable for headset environments is one of the challenges in the field of augmented reality (AR) technologies. This study proposes a hands-free UI for an AR headset that exploits facial gestures of the wearer to recognize user intentions. The facial gestures of the headset wearer are detected by a custom-designed sensor that measures skin deformation based on the infrared diffusion characteristics of human skin. We designed a deep neural network classifier to determine the user's intended gestures from skin-deformation data, which serve as user input commands for the proposed UI system. The classifier is composed of a spatiotemporal autoencoder and a deep embedded clustering algorithm, trained in an unsupervised manner. The UI device was embedded in a commercial AR headset, and several experiments were performed on online sensor data to verify the operation of the device. We implemented a hands-free UI for an AR headset that achieved an average user-command recognition accuracy of 95.4% in tests with participants.
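The classifier described above combines a spatiotemporal autoencoder with deep embedded clustering (DEC). As a minimal sketch of the DEC step only, the snippet below computes the Student's t-kernel soft assignment of embeddings to cluster centers and the sharpened target distribution used for unsupervised training. The embeddings here stand in for the autoencoder's latent codes; all names (`z`, `mu`, `alpha`) and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_assignment(z, mu, alpha=1.0):
    """Soft cluster assignment q_ij via a Student's t-kernel.

    z  : (n, d) array of embeddings (e.g., autoencoder latent codes)
    mu : (k, d) array of cluster centers (e.g., one per gesture class)
    """
    # Squared distances between every embedding and every center, shape (n, k).
    d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)  # rows sum to 1

def target_distribution(q):
    """Sharpened target P that emphasizes high-confidence assignments."""
    weight = q ** 2 / q.sum(axis=0)          # q_ij^2 / f_j, f_j = cluster frequency
    return weight / weight.sum(axis=1, keepdims=True)

# Toy example: 6 embeddings, 3 hypothetical gesture clusters.
rng = np.random.default_rng(0)
z = rng.normal(size=(6, 4))
mu = rng.normal(size=(3, 4))
q = soft_assignment(z, mu)
p = target_distribution(q)
```

In DEC training, the network is updated to minimize the KL divergence between `p` and `q`, which pulls embeddings toward their most confident cluster; no gesture labels are needed, consistent with the unsupervised training described in the abstract.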

Highlights

  • Augmented reality (AR) is one of the hottest issues in the information and communication technology (ICT) industry

  • In the field of augmented reality (AR), developing a user interface (UI) device that is optimized for AR headsets is one of the key challenges

  • To address the user-interface requirements of headset devices, we propose a method that detects skin deformation with a custom-made sensor monitoring the facial skin of headset users


Summary

Introduction

Augmented reality (AR) is one of the hottest issues in the information and communication technology (ICT) industry. In the field of AR, developing a user interface (UI) device that is optimized for AR headsets is one of the key challenges. This is because the headset environment, which differs greatly from a personal computer or mobile phone, makes it difficult to use conventional UIs (e.g., keyboard, mouse, or touch screen). Voice recognition techniques and physiological sensors, such as electrooculography (EOG), electroencephalography (EEG), or electromyography (EMG), are promising alternative means of providing a hands-free UI for AR headsets, but optimal sensing devices and methods for implementing a headset UI have yet to be developed.

Methods
Results
Discussion
Conclusion

