Abstract

Motion-based hand gestures are an important scheme for allowing users to invoke commands on their smartphones in an eyes-free manner. However, the existing scheme faces several problems. On the one hand, the expressive ability of a single gesture is limited; as a result, a gesture set consisting of multiple gestures is typically adopted to represent different commands, and users must memorize all gestures to interact successfully. On the other hand, gestures must be designed to be complicated in order to express diverse intentions, yet complex gestures are difficult to learn and remember, and they set a high recognition barrier for smart apps. This also leads to an imbalance problem: different gestures have different recognition accuracy levels, which may destabilize recognition precision in practical applications. To address these problems, this paper proposes a novel scheme using binary motion gestures. Only two simple gestures are required to express the bits “0” and “1,” and rich information can be expressed through permutations and combinations of these two binary gestures. First, four kinds of candidate binary gestures are evaluated for eyes-free interaction. Then, an online signal cutting and merging algorithm is designed to split the accelerometer signal sequence into separate gesture signal segments. Next, five algorithms, namely Dynamic Time Warping (DTW), Naive Bayes, Decision Tree, Support Vector Machine (SVM), and Bidirectional Long Short-Term Memory (BLSTM) networks, are adopted to recognize these knock-gesture segments. BLSTM achieves the best performance in terms of both recognition accuracy and recognition balance. Finally, an Android application is developed to illustrate the usability of the proposed binary gestures. Because binary gestures are much simpler than traditional hand gestures, they are more efficient and user-friendly. Our scheme eliminates the imbalance problem and achieves high recognition accuracy.
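The core idea above, encoding commands as permutations of two binary gestures, can be sketched in a few lines. The gesture-to-command table below is purely hypothetical; the paper does not specify which bit strings map to which smartphone actions.

```python
# Sketch: decoding a sequence of recognized binary gestures into a command.
# The COMMANDS table is a hypothetical example mapping, not the paper's.

COMMANDS = {
    "00": "play/pause",
    "01": "next track",
    "10": "previous track",
    "11": "volume up",
}

def decode(gestures):
    """Join recognized binary gestures ('0' or '1') into a bit string
    and look up the corresponding command."""
    bits = "".join(gestures)
    return COMMANDS.get(bits, "unknown")

print(decode(["0", "1"]))  # next track
```

With two gestures per command, four commands are expressible; each additional gesture doubles the command space, which is why the combination of simple binary gestures can represent rich information.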

Highlights

  • Eyes-free interaction is a method of controlling mobile devices without having to look at the device [1]

  • Support Vector Machine (SVM) constructs a hyperplane in a high-dimensional space to separate two classes of gestures: single-action and double-action gestures

  • A metric defined in equation (11) is used to evaluate the recognition accuracy
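Of the five recognizers compared in the abstract, Dynamic Time Warping is the simplest to sketch: it measures the distance between a gesture segment and a stored template while tolerating differences in speed. The implementation below is a generic textbook DTW over 1-D signals, not the paper's exact version.

```python
# Minimal dynamic time warping (DTW) distance between two 1-D signals,
# usable for template matching of gesture segments.

def dtw(a, b):
    """Return the DTW distance between sequences a and b
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Identical signals have zero distance.
print(dtw([0, 1, 2, 1, 0], [0, 1, 2, 1, 0]))  # 0.0
```

In a template-matching recognizer, a segment would be labeled “0” or “1” according to whichever stored template yields the smaller DTW distance.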



Introduction

Eyes-free interaction is a method of controlling mobile devices without having to look at the device [1]. In [2], a digital calculator operated with fingers on touch screens is developed; this method uses taps for digit input and swipes for other operations. In [3], a nonvisual text entry method that uses the 6-bit Braille character encoding is presented. Siri is one of the most prominent examples of a mobile voice interface. Another important way is to use a motion-based hand gesture [5]. To command a smartphone to execute a task, a user performs a hand gesture with that phone in hand. The type of gesture is recognized by analysing data samples captured by motion sensors, such as accelerometers, gyroscopes, and orientation sensors.
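Before any of these sensor samples can be classified, the gesture portions of the stream must be isolated, which is the role of the cutting step mentioned in the abstract. The sketch below segments an accelerometer magnitude stream with a simple energy threshold; the threshold value and run-based logic are illustrative assumptions, as the paper's online cutting and merging algorithm is more involved.

```python
# Sketch: splitting an accelerometer magnitude stream into candidate
# gesture segments by thresholding. Threshold and logic are assumptions,
# not the paper's online cutting-and-merging algorithm.

def segment(signal, threshold=1.0):
    """Return (start, end) index pairs of contiguous runs where the
    signal magnitude exceeds the threshold."""
    segments, start = [], None
    for i, v in enumerate(signal):
        if abs(v) > threshold:
            if start is None:
                start = i          # a new segment begins
        elif start is not None:
            segments.append((start, i))  # the segment just ended
            start = None
    if start is not None:
        segments.append((start, len(signal)))  # segment runs to the end
    return segments

sig = [0.1, 0.2, 2.5, 3.0, 0.1, 0.0, 1.8, 2.2, 2.0, 0.2]
print(segment(sig))  # [(2, 4), (6, 9)]
```

Each returned segment would then be passed to a recognizer (e.g. DTW or BLSTM) to be labeled as bit “0” or “1.”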

