Abstract

Silent communication devices are necessary in situations where audible speech cannot be relied upon, whether because the speaker has a physical voice impairment or because the environment prevents sound from being transmitted reliably or securely. A wearable, ultrasensitive strain sensor worn on the throat can detect the small muscle movements and vibrations associated with intended phonations, and feedback can then be passed back to the wearer to indicate whether the speech has been correctly predicted. In this paper we propose a wearable patch for silent communication and demonstrate a proof-of-concept graphene-based strain-gauge sensor which, combined with machine learning algorithms, can record and decode non-audio signals. The sensor registers small throat movements when someone speaks, or attempts to speak, as changes in the resistance of the device; these signals are passed to machine learning algorithms which predict what is being said. A dataset of 15 unique words and four movements, each with ten repetitions from two participants, was developed and used to train the algorithms. The results demonstrate that such sensors can predict spoken words: we achieved a prediction accuracy of 51% on the word dataset and 82% on the movement dataset. We further explore haptic forms of feedback which can be incorporated into a smart wearable patch. Together, this provides an initial demonstration of a wearable silent communication device.
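
The classification pipeline the abstract describes — resistance time series labelled by word or movement, fed to a supervised classifier — can be sketched as below. This is a minimal illustration, not the authors' method: the paper does not specify the algorithm, so a random-forest baseline from scikit-learn is assumed, and the recordings here are synthetic placeholders with the same class structure (15 words + 4 movements, 10 repetitions, 2 participants).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset mirroring the paper's setup: 15 words + 4 movements
# = 19 classes, 10 repetitions each, from 2 participants (380 recordings).
# Each recording is a fixed-length resistance-change window (synthetic here).
n_classes, reps, participants, window = 19, 10, 2, 200
X = rng.normal(size=(n_classes * reps * participants, window))
y = np.repeat(np.arange(n_classes), reps * participants)

# Baseline: treat each window as a flat feature vector and train a
# random-forest classifier; real signals would first need normalisation
# and segmentation around each utterance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

On the synthetic noise above the accuracy is near chance (about 1/19); with genuine throat-strain signals the same pipeline is what would produce word- and movement-level accuracies such as the 51% and 82% reported.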
