Abstract

SmartWeb provides a context-aware user interface to web services, so that it can support the mobile user in different roles, e.g. as a car driver, a motorcyclist, or a pedestrian. It provides a symmetric multimodal dialogue system [2] combining speech, gesture, haptic and video input with speech, haptic, video and acoustic output. It goes beyond traditional keyword search engines like Google by delivering higher-quality results that are adapted to the mobile user's current task and situation. In mobile situations, users do not want to deal with hypertext lists of retrieved webpages; they simply want an answer to their query. If a desperate driver with a crying and acutely ill child in the back seat asks SmartWeb "Who is the closest paediatrician?", he needs just the name and address of the doctor. Based on SmartWeb's ability to combine various web services, the driver can then ask SmartWeb a follow-up question about route guidance to the doctor's practice. One of the innovative features of SmartWeb is that the user can specify whether he wants a textual or pictorial answer, a video clip or a sound file as a query result.
