Abstract

Finger gesture input has emerged as an increasingly popular means of human-computer interaction. In this demo, we propose iPand, an acoustic sensing system that enables finger gesture input on the skin, which is more convenient, user-friendly, and always accessible. Unlike past works, which implement gesture input with dedicated devices, our system exploits passive acoustic sensing to identify gestures, e.g., swipe left, swipe right, pinch, and spread. The intuition underlying our system is that each gesture emits a unique friction sound, which can be captured by the microphone embedded in wearable devices. We then adopt a convolutional neural network to recognize the gestures. We implement and evaluate iPand using COTS smartphones and smartwatches. Results from three daily scenarios (i.e., library, lab, and cafe) with 10 volunteers show that iPand achieves recognition accuracies of 87%, 81%, and 77%, respectively.
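The abstract describes a two-stage pipeline: the wearable's microphone records the friction sound of a gesture on the skin, and a convolutional neural network classifies the clip. Below is a minimal sketch of such a pipeline, assuming PyTorch and librosa; the network architecture, gesture labels, sample rate, and preprocessing parameters are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an iPand-style pipeline: a friction-sound clip
# is converted to a log-mel spectrogram and classified by a small CNN.
# GestureCNN, GESTURES, and all hyperparameters are assumptions made
# for illustration, not details from the paper.

import numpy as np
import torch
import torch.nn as nn
import librosa

GESTURES = ["swipe_left", "swipe_right", "pinch", "spread"]

class GestureCNN(nn.Module):
    def __init__(self, n_classes=len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, n_mels, frames)
        return self.classifier(self.features(x))

def clip_to_logmel(y, sr=16000, n_mels=64):
    """Turn a mono waveform into a (1, n_mels, frames) log-mel tensor."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    return torch.from_numpy(logmel).float().unsqueeze(0)

if __name__ == "__main__":
    # Random noise stands in for a 0.5 s friction-sound recording
    # captured by the smartwatch microphone.
    y = np.random.randn(8000).astype(np.float32)
    x = clip_to_logmel(y).unsqueeze(0)  # add batch dimension
    model = GestureCNN()
    pred = model(x).argmax(dim=1).item()
    print("predicted gesture:", GESTURES[pred])
```

In a real deployment the model would be trained on labeled recordings from each scenario (library, lab, cafe); the spectrogram front end is a common choice for passive acoustic sensing because friction sounds from different gestures differ in their time-frequency signatures.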
