Abstract

Reliable high-level fusion of several input modalities is hard to achieve, and (semi-)automatically generating it is even more difficult. However, it is important to address in order to broaden the scope of providing user interfaces semi-automatically. Our approach starts from a high-level discourse model created by a human interaction designer. Since this model is modality-independent, an annotated discourse is semi-automatically generated from it, which influences the fusion mechanism. Our high-level fusion checks hypotheses from the various input modalities using finite state machines. These are modality-independent, and they are automatically generated from the given discourse model. Taken together, our approach provides semi-automatic generation of high-level fusion. It currently supports the input modalities graphical user interface, (simple) speech, a few hand gestures, and a bar code reader.
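To make the idea concrete, the following is a minimal sketch of how a finite state machine could check hypotheses arriving from several modalities. This is not the authors' implementation; all class names, tokens, and transitions here are hypothetical illustrations, and in the described approach the transition table would be generated automatically from the discourse model rather than written by hand.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """An input hypothesis from one modality (e.g. speech, gesture, GUI)."""
    modality: str    # e.g. "speech", "gesture", "gui", "barcode"
    token: str       # abstract communicative act, e.g. "select_item"
    confidence: float

class FusionFSM:
    """Modality-independent finite state machine that accepts or rejects
    hypotheses according to transitions derived from a discourse model."""

    def __init__(self, transitions, start="start", accepting=frozenset({"done"})):
        # transitions: {(state, token): next_state}
        self.transitions = transitions
        self.state = start
        self.accepting = accepting

    def feed(self, hyp: Hypothesis) -> bool:
        """Advance on a hypothesis if the discourse allows it in this state."""
        nxt = self.transitions.get((self.state, hyp.token))
        if nxt is None:
            return False  # hypothesis rejected in the current state
        self.state = nxt
        return True

    @property
    def accepted(self) -> bool:
        return self.state in self.accepting

# Hypothetical transitions, as they might be generated from a discourse
# model for "select an item, then confirm the selection".
fsm = FusionFSM({
    ("start", "select_item"): "item_selected",
    ("item_selected", "confirm"): "done",
})

fsm.feed(Hypothesis("gesture", "select_item", 0.8))  # pointing gesture
fsm.feed(Hypothesis("speech", "confirm", 0.9))       # spoken "yes"
assert fsm.accepted
```

Because the FSM consumes abstract tokens rather than raw sensor data, the same machine accepts a selection whether it arrives as a pointing gesture, a spoken command, or a GUI click, which is what makes the fusion modality-independent.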
