Abstract

This paper introduces an improved fission rule based on the SNNR (Signal-Plus-Noise-to-Noise Ratio) and fuzzy values for simultaneous multi-modality, and proposes a Fusion User Interface (hereinafter, FUI) that synchronizes audio and gesture modalities, built on an embedded KSSL (Korean Standard Sign Language) recognizer using the WPS (Wearable Personal Station for the next-generation PC) and VoiceXML. Our approach fuses and recognizes 62 sentence and 152 word language models represented by speech and KSSL, then translates the recognition results, fissioned according to a weight decision rule, into synthetic speech and a visual illustration (graphical display on an HMD, Head-Mounted Display) in real time. In experiments, the average recognition rates of the FUI for the 62 sentence and 152 word language models were 94.33% and 96.85% in clean environments (e.g., office space), and 92.29% and 92.91% in noisy environments.
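The abstract describes fissioning recognition results between audio and visual output channels according to a weight decision rule driven by the SNNR and a fuzzy value. The paper's actual rule and thresholds are not given here, so the following is only a minimal illustrative sketch: the function names, the 30 dB mapping range, and the blending scheme are all assumptions, not the authors' method.

```python
import math

def snnr_db(signal_plus_noise_power: float, noise_power: float) -> float:
    """SNNR in decibels: 10*log10((S+N)/N)."""
    return 10.0 * math.log10(signal_plus_noise_power / noise_power)

def fission_weights(snnr: float, fuzzy_confidence: float) -> dict:
    """Hypothetical weight decision: split output between synthetic speech
    and the HMD display. A low SNNR (noisy environment) shifts weight
    toward the visual channel; the fuzzy recognition confidence scales
    the audio weight further."""
    # Map SNNR in [0, 30] dB to an audio preference in [0, 1] (assumed range).
    audio_pref = min(max(snnr / 30.0, 0.0), 1.0)
    w_audio = audio_pref * fuzzy_confidence
    return {"speech": w_audio, "hmd": 1.0 - w_audio}
```

For example, in a quiet office with high confidence (`fission_weights(30.0, 1.0)`) all weight goes to speech output, while at 0 dB SNNR the result is routed entirely to the HMD regardless of confidence.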
