Abstract
Silent Speech Interfaces (SSIs) are sensor-based communication systems in which a speaker articulates normally but does not activate the vocal cords, creating a natural user interface that neither disturbs the ambient audio environment nor compromises private content, and that can also be used in noisy environments where a clean audio signal is unavailable. The SSI field was launched in 2010 with a special issue of Speech Communication, in which systems based on ultrasound tongue imaging, electromyography, and electromagnetic articulography were proposed. Today, although ultrasound-based SSIs can achieve Word Error Rates rivaling those of acoustic speech recognition, they have yet to reach the marketplace because of performance stability problems. In recent years, numerous approaches have been proposed to address this issue, including better acquisition hardware, improved tongue contour tracking, Deep Learning analysis, and the association of ultrasound data with a real-time 3D model of the tongue. After outlining the history and basics of SSIs, the talk will present a summary of recent advances aimed at bringing SSIs out of the laboratory and into real-world applications.