Abstract

Non-contact human-computer interaction (HCI) based on hand gestures has been widely investigated. Here, we present a novel method for locating the real-time position of the hand using the electrostatics of the human body. The method has several advantages: a delay of less than one millisecond, low cost, and no need for a camera or wearable devices. A sensing formula is first derived for an array of five spherical electrodes. Next, a solving algorithm for the real-time measured hand position is introduced, and solving equations for the three-dimensional coordinates of the hand position are obtained. A non-contact real-time hand position sensing system was built to perform verification experiments, and the principle error of the algorithm and the systematic noise were also analyzed. The results show that this technology can determine the dynamic parameters of hand movements with good robustness, meeting the requirements of complicated HCI.
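To make the solving step concrete, the following is a minimal sketch of one way such a position solve could work. It assumes a simple point-charge model of the electrified hand and a hypothetical electrode geometry (the paper's actual sensing formula and array layout are not reproduced here): the five electrode potentials are fit by nonlinear least squares for the charge position and magnitude.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical positions (meters) of a five-spherical-electrode array;
# the actual array geometry used in the paper may differ.
ELECTRODES = np.array([
    [0.0, 0.0, 0.0],
    [0.2, 0.0, 0.0],
    [0.0, 0.2, 0.0],
    [0.2, 0.2, 0.0],
    [0.1, 0.1, 0.05],
])

K = 8.9875e9  # Coulomb constant, 1/(4*pi*eps0)


def potentials(pos, q_nC):
    """Point-charge potentials (volts) induced at each electrode.

    pos  : (3,) charge position in meters
    q_nC : charge magnitude in nanocoulombs (nC scaling keeps the
           least-squares parameters on comparable numeric ranges)
    """
    d = np.linalg.norm(ELECTRODES - pos, axis=1)
    return K * q_nC * 1e-9 / d


def locate(measured, x0=(0.1, 0.1, 0.2), q0=1.0):
    """Recover the 3D charge position (and magnitude) from five
    measured potentials by nonlinear least squares."""
    def residual(params):
        return potentials(params[:3], params[3]) - measured

    sol = least_squares(residual, np.r_[x0, q0])
    return sol.x[:3], sol.x[3]


# Simulated "hand" at a known position with a known charge
true_pos = np.array([0.05, 0.12, 0.25])
meas = potentials(true_pos, 2.0)

pos, q = locate(meas)
```

With five measurements and four unknowns the system is overdetermined, which is one plausible reason for a five-electrode array: the extra equation constrains the fit against noise.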

Highlights

  • Natural, harmonious, and highly efficient human-computer interaction (HCI) has become a trend in HCI research

  • Hand motion sensing systems can be divided into data glove-based, attached force-based, surface electromyography (sEMG)-based, optical marker-based, and vision-based capturing [2]; these fall mainly into wearable and vision-based hand motion sensing methods [1,3]

  • We further developed a real-time hand position algorithm based on the electrostatic information from the human body, designed a new electrostatic electrode array structure and a new solving method to solve the position of the charge source, achieving real-time human hand positioning

Introduction

Natural, harmonious, and highly efficient human-computer interaction (HCI) has become a trend in HCI research. Hand motion-based interaction is one of the most important methods of human-computer interaction [1]. Hand motion sensing systems can be divided into data glove-based, attached force-based, surface electromyography (sEMG)-based, optical marker-based, and vision-based capturing [2]; these fall mainly into wearable and vision-based hand motion sensing methods [1,3]. Wearable gesture recognition offers high recognition accuracy and can capture the movement details of the hand. Vision-based gesture recognition obtains gesture information in a non-contact manner and can be applied to a wider range of fields. Both approaches have disadvantages, however: wearable methods suffer from poor user experience, and vision-based methods are vulnerable to environmental factors such as illumination [1,4].
