Abstract
<h3>Research Objectives</h3> We propose a mobile application that recognizes specific programmed movements of the device and elicits corresponding text-to-speech phrases. For people who are unable to speak and whose limited mobility prevents them from using keyboards or tablets, this enables faster social interaction, which can greatly improve quality of life. <h3>Design</h3> Initial prototype development and testing. <h3>Setting</h3> General community. <h3>Participants</h3> Six healthy subjects participated in repeated data collection of 11 distinct gestures for testing. <h3>Interventions</h3> Not applicable. <h3>Main Outcome Measures</h3> A cross-platform mobile application integrated with a deep learning LSTM model that recognizes a distinct gesture performed by the user after two seconds of recording, triggering a selected text-to-speech auditory response. <h3>Results</h3> Accelerometer data were collected for eleven distinct gestures (e.g., waving, drawing a horizontal line, drawing a vertical line). The data were partitioned into three parts: training, validation, and test. The training and validation sets were used to train the model, and the test set was used for evaluation. The model achieved 96% accuracy on average, with errors occurring primarily between one pair of gestures that are difficult to distinguish. <h3>Conclusions</h3> The application is an early, readily shared demonstration of gesture-to-speech. Further development is intended to support additional gestures and to improve recognition accuracy and speed, with the goal of enabling more efficient communication for individuals unable to speak. <h3>Author(s) Disclosures</h3> None.
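As a minimal sketch of the data pipeline the abstract describes, the snippet below shows how two-second accelerometer recordings might be shaped into fixed-size windows for an LSTM classifier and partitioned into training, validation, and test sets. The sampling rate, window shape, split fractions, and helper names are illustrative assumptions, not details from the study.

```python
import numpy as np

# Assumed parameters -- the abstract does not state the sampling rate
# or split fractions, so these values are purely illustrative.
SAMPLE_RATE_HZ = 50          # assumed accelerometer sampling rate
WINDOW_SECONDS = 2           # the abstract's two-second recording window
N_GESTURES = 11              # eleven distinct gestures (wave, lines, ...)

def make_window(samples):
    """Trim or zero-pad raw (x, y, z) accelerometer samples into a
    fixed window of shape (SAMPLE_RATE_HZ * WINDOW_SECONDS, 3)."""
    n = SAMPLE_RATE_HZ * WINDOW_SECONDS
    arr = np.asarray(samples, dtype=np.float32)[:n]
    if arr.shape[0] < n:  # zero-pad recordings shorter than the window
        pad = np.zeros((n - arr.shape[0], 3), dtype=np.float32)
        arr = np.vstack([arr, pad])
    return arr

def split_dataset(X, y, val_frac=0.15, test_frac=0.15, seed=0):
    """Partition windows into training/validation/test sets, as in the
    abstract (the exact fractions here are assumptions)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test_i = idx[:n_test]
    val_i = idx[n_test:n_test + n_val]
    train_i = idx[n_test + n_val:]
    return (X[train_i], y[train_i]), (X[val_i], y[val_i]), (X[test_i], y[test_i])

# Example: 60 simulated recordings of varying length, labeled 0..10
X = np.stack([make_window(np.random.randn(90 + i % 30, 3)) for i in range(60)])
y = np.arange(60) % N_GESTURES
train, val, test = split_dataset(X, y)
print(X.shape)  # (60, 100, 3): 60 windows of 100 samples x 3 axes
```

Each window in `X` would then be fed to the LSTM classifier over the 11 gesture classes; the recognized class triggers the selected text-to-speech phrase.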