Abstract

The modeling and implementation of sophisticated multimodal software/hardware interfaces is a current scientific challenge of high societal relevance. Such interfaces must be able to interact with people, infer social, organizational, and physical contexts from sensed data, assist people with special needs, enhance elderly healthcare, and support learning and rehabilitation in daily functional activities. Implementing such Human–Computer Interaction (HCI) systems is of public utility: they should simplify users' access to a wide range of social services, either remotely or in person-to-person settings. The current and future applications foreseen in this highly interdisciplinary field are countless. Among them are context-aware avatars and robotic devices that replace or act on behalf of humans in high-responsibility or time-critical dangerous tasks such as urban emergencies. Other emerging applications concern robot companions for elderly and vulnerable people, and intelligent agents for services where suitable skills are scarce or where significant investment in training qualified personnel would otherwise be required, as in therapist-based interventions. Given the complexity of these automated tasks, the development of such devices calls for a holistic investigation perspective. New cognitive architectures must be devised, and new cognitive integrations exploited, in order to take advantage of the knowledge derived from the analysis of human behavior across different contexts. At stake is the need to develop a deep understanding of the emotional and intentional cognitive processes underpinning human interactions.
New insights must be deployed for designing complex autonomous systems, which are required to sense human emotional and intentional states; to adapt to them cooperatively through socially ethical and sensible conduct; and to exhibit coherent vocal, visual, and gestural affordances. The present Special Issue investigates these topics by gathering new experimental data and theories across a spectrum of disciplines, in order to identify the metastructures underlying these phenomena. This effort will hopefully stimulate, on the one hand, the conception of new mathematical models for representing data, reasoning, and learning, and, on the other hand, new psychological and computational approaches beyond existing cognitive frameworks and algorithmic solutions. Enabling consistent progress toward human-level intelligence in automata is crucial for developing such HCI systems and for enhancing people's quality of life by addressing their current and future societal needs. The topics proposed by the present Special Issue are interdisciplinary and cover issues related to several areas of research, including: behavioral analysis of interactions; mathematical models for representing data, reasoning, and learning; social signal and context effects; and algorithmic solutions for socially believable robots.

A. Esposito, Department of Psychology and IIASS, Seconda Università di Napoli, Caserta, Italy. e-mail: iiass.annaesp@tin.it; anna.esposito@unina2.it
