Abstract

The non-touch system is a modern human–computer interface technology with the potential to revolutionize human–computer interaction. Such an interface allows a user to input data and interact with a machine or robot in uncontrolled environments, such as medical treatment or industrial settings. However, inputting data and interacting with a machine is challenging due to complexities such as cluttered backgrounds, gesture tracking, and speed. Many evolving systems, for example aerial handwriting, sign language recognition, and finger-alphabet recognition, require substantial effort to learn every character and incur heavy processing overhead, which reduces classification accuracy. Therefore, this paper proposes a non-touch character-writing system that allows users to interact with and manage an on-screen virtual keyboard in a secure and hygienic way by recognizing a few hand gestures. We divide this work into two parts: a) hand gesture recognition; and b) gestural flick input using a virtual keyboard. A user-friendly keyboard interface using a flick input method is displayed on the monitor. A deep learning method based on a convolutional neural network (CNN) is used to extract the features of a gesture. To obtain these features, color segmentation is used to detect the hand: skin-color pixels are isolated by extracting a particular HSV (hue, saturation, value) range and applying a threshold mask to the input image. Finally, a support vector machine (SVM) is used to give a more accurate classification of the hand gestures. The user performs non-touch character input with the gestural flick input system, entering characters while viewing the virtual keyboard; character input is executed based on recognition of the user's hand gestures. The system is evaluated on the average classification accuracy of hand gestures and character recognition, as well as input accuracy and speed, and is then compared with state-of-the-art algorithms.
The experimental results show that the proposed system can recognize seven typical gesture functions and input characters with 97.93% accuracy, demonstrating its superiority over the state-of-the-art algorithms.
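The HSV-based hand-detection step described in the abstract can be sketched as follows. This is a minimal illustration of HSV range thresholding, not the authors' implementation; the skin-tone bounds used here are hypothetical example values, and real systems tune them per camera and lighting.

```python
import colorsys
import numpy as np

def segment_hand(rgb_image, lower=(0.0, 0.15, 0.25), upper=(0.14, 1.0, 1.0)):
    """Return a binary mask of pixels whose HSV values fall within a skin range.

    Hue, saturation, and value are in [0, 1] (colorsys convention).
    The `lower`/`upper` bounds are illustrative, not from the paper.
    """
    h, w, _ = rgb_image.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            r, g, b = rgb_image[i, j] / 255.0
            hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
            # Keep the pixel only if all three channels fall in the target range.
            mask[i, j] = (lower[0] <= hh <= upper[0]
                          and lower[1] <= ss <= upper[1]
                          and lower[2] <= vv <= upper[2])
    return mask
```

In practice, the resulting mask would be cleaned with morphological operations before the CNN feature-extraction stage; a vectorized or OpenCV-based implementation would replace the per-pixel loop for real-time use.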
