Abstract

Searching for information through the World Wide Web is a common task nowadays. However, it is mostly a textual experience. This work proposes and implements a three-dimensional (3D) environment for representing Web search results, controlled through gesture and speech recognition. The system comprises two components: a server-side solution based on XML Web Services that can be consumed by clients searching for information on the Web, and a client-side desktop application that serves as a graphical user interface driven by speech and gesture commands. The latter application is responsible for graphically modeling the 3D environment according to the Web search results. A prototype of the proposed system was implemented; it presents a set of Web search results on the inner faces of a torus and manipulates the scene through 3D transformations such as rotation and scaling, driven by gesture and voice commands. An evaluation of the prototype in search contexts suggests a high level of user acceptance of the proposed multimedia model. Testers reported that the prototype has an intuitive graphical user interface and that both speech and gesture recognition are highly functional alternatives for Web search tasks. Still, some aspects remain to be improved, particularly speech recognition support for different accents. Overall, the multimedia prototype met the expectations of a sample of end users, showing that such interfaces have great potential.
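To make the interaction model concrete, the following is a minimal sketch (not the authors' implementation) of how recognized voice or gesture commands could be mapped to the rotation and scaling transformations the abstract describes. The command names, rotation step, and scale factors are assumptions for illustration only.

```python
import numpy as np

def rotation_z(theta_deg: float) -> np.ndarray:
    """4x4 homogeneous matrix rotating the torus about its central (z) axis."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def scaling(factor: float) -> np.ndarray:
    """4x4 homogeneous matrix uniformly scaling a selected result face."""
    return np.diag([factor, factor, factor, 1.0])

# Hypothetical command-to-transform mapping: a spoken "next" rotates the
# torus so the following result face comes forward; a "zoom" gesture
# scales the currently selected face.
COMMANDS = {
    "next":     rotation_z(-30.0),   # step to the next inner face
    "previous": rotation_z(30.0),    # step back to the previous face
    "zoom in":  scaling(1.25),       # enlarge the selected result
    "zoom out": scaling(0.8),        # shrink it again
}

def apply_command(model_matrix: np.ndarray, command: str) -> np.ndarray:
    """Compose the transform for a recognized command onto the scene's model matrix."""
    return COMMANDS.get(command, np.eye(4)) @ model_matrix

# Example: a "next" voice command followed by a "zoom in" gesture.
m = np.eye(4)
m = apply_command(m, "next")
m = apply_command(m, "zoom in")
print(np.round(m, 3))
```

In such a design, the speech and gesture recognizers only need to emit command tokens; the 3D client composes the corresponding transforms onto the scene, keeping the recognition and rendering layers decoupled.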
