Abstract

Constrained by their size, shape, and other form factors, many new smart devices are inconvenient to interact with through traditional input methods. Prior work has proposed acoustic-based methods that follow a machine learning approach to address this problem, but these methods suffer from heavy training overhead, poor performance for unseen users, and high computation cost. Building on our previous work in this area, we overcome these shortcomings and propose a lightweight, zero-shot text-entry system for unseen users based on acoustic sensing. The key novelty of this work is a new model training strategy, including dataset construction and augmentation methods, that effectively enhances the generalization ability of a simple learning model with as little training data as possible, based on our insight into the problem. We design and implement a real-time Android application system called EchoWrite 2.0 to validate our idea through extensive experiments. Results show that EchoWrite 2.0 recognizes digits, English letters, and words with accuracies of 85.3%, 73.2%, and 96.9%, respectively, for unseen users who provide no data to the learning model. A comparison with related work across different aspects shows the overall superiority of EchoWrite 2.0.
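
The abstract does not describe the dataset construction or augmentation methods in detail. As a rough illustration only, the sketch below shows generic waveform-level augmentations (random time shift, amplitude scaling, additive noise) of the kind commonly used to expand small acoustic training sets and improve generalization to unseen users; all function names and parameter values here are hypothetical and are not taken from the paper.

    import numpy as np

    def augment_acoustic_sample(signal: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        """Return one augmented copy of a 1-D acoustic signal (illustrative only)."""
        out = signal.copy()
        # Random circular time shift of up to ~5% of the signal length,
        # mimicking variation in when the stroke starts within a frame.
        max_shift = max(1, len(out) // 20)
        out = np.roll(out, rng.integers(-max_shift, max_shift + 1))
        # Random amplitude scaling, mimicking different writing distances or pressures.
        out = out * rng.uniform(0.8, 1.2)
        # Additive Gaussian noise, mimicking varying ambient conditions.
        out = out + rng.normal(0.0, 0.01 * np.std(out), size=out.shape)
        return out

    # Example: expand a small seed set into a larger augmented training set.
    rng = np.random.default_rng(0)
    seed_samples = [np.sin(np.linspace(0, 8 * np.pi, 2048))]  # placeholder signal
    augmented = [augment_acoustic_sample(s, rng) for s in seed_samples for _ in range(5)]

Augmentations like these only sketch the general idea of enlarging per-class variability with few recorded samples; the paper's actual construction and augmentation pipeline should be consulted in the full text.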
