Abstract
Most work on multimodal interaction in the human-computer interaction (HCI) space has focused on enabling a user to use one or more modalities in combination to interact with a system. However, there is still a long way to go before human-to-machine communication is as rich and intuitive as human-to-human communication. In human-to-human communication, modalities are used individually, simultaneously, interchangeably, or in combination. The choice of modalities depends on a variety of factors, including the context of the conversation, social distance, physical proximity, and duration. We believe such intuitive multimodal communication is the direction in which human-to-machine interaction is headed. In this paper, we present the insights we gained from studying current human-machine interaction methods. We carried out an ethnographic study to observe users in their homes as they interacted with media and media devices, by themselves and in small groups. One of the key learnings from this study is an understanding of the impact of the user's context on the choice of interaction modalities. The user-context factors that influence the choice of interaction modalities include, but are not limited to: the distance of the user from the device or media, the user's body posture during the media interaction, the user's level of involvement with the media, the seating patterns (clusters) of co-located participants, the roles that each participant plays, the notion of control among the participants, and the duration of the activity. We believe that the insights from this study can inform the design of next-generation multimodal interfaces that are sensitive to user context, perform robust interpretation of interaction inputs, and support more human-like multimodal interaction.

Keywords: Media Device, Living Room, Input Device, Interaction Session, User Context