Abstract
We present a piezoelectric microelectromechanical system (MEMS) sensor for unvoiced speech recognition based on oral airflow. Compared with the state of the art, the sensor performs well at (i) recognizing words by their unique voltage-signal characteristics and (ii) resisting interference from external acoustic noise, body movement, long distances, and occlusion. Future work will incorporate machine learning to display recognized speech on mobile phones, with the aim of serving people who cannot speak loudly because of acquired throat injuries, serious illness, or physical weakness, or who must remain quiet in public environments.