Abstract

Autonomy is crucial in cooperation. The complexity of human-robot interaction (HRI) scenarios requires autonomous robots able to exploit their superhuman computational capabilities (based on deep neural networks, machine learning techniques, and big data) in a trustworthy way. Trustworthiness is not only a matter of accuracy, privacy, or security; it is increasingly a matter of adaptation to human agency. As claimed by Falcone and Castelfranchi, autonomy means the possibility of displaying or providing an unexpected behavior (including refusal) that departs from a requested (agreed upon or not) behavior. In this sense, the autonomy to decide how to adopt a task delegated by the user, with respect to her/his actual needs and goals, distinguishes intelligent and trustworthy robots from merely high-performing robots. This kind of smart help can be provided only by cognitive robots able to represent and ascribe mental states (beliefs, goals, intentions, desires, etc.) to their interlocutors. The attribution of mental states can be the result of complex reasoning mechanisms, or it can be fast and automatic, based on the scripts, roles, categories, or stereotypes that humans typically exploit every time they interact in everyday life. In all these cases, robots that build and use cognitive models of humans (i.e., that have a Theory of Mind of their interlocutors) must also carry out a meta-evaluation of the predictive skills with which they build those models. Robots have to be endowed with the capability to assess self-trust in their skills for interpreting the interlocutor and the context, in order to produce smart and effective decisions towards humans. After exploring the main concepts that make collaboration between humans and robots trustworthy and effective, we present the first of a series of experiments designed to test different aspects of a cognitive architecture for trustworthy HRI. This architecture, based on consolidated theoretical principles (the theory of social adjustable autonomy, theory of mind, and theory of trust), has the main goal of building cognitive robots that provide smart, trustworthy collaboration every time a human requires their help. In particular, the experiment has been designed to demonstrate how the robot's capability to learn its own level of self-trust in its predictive abilities (perceiving the user and building a model of her/him) allows it to establish a trustworthy collaboration and to maintain a high level of user satisfaction with the robot's performance, even when these abilities progressively degrade.
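To make the notion of learned self-trust concrete, the following minimal Python sketch is our own illustration, not the architecture described in the paper: the class SelfTrustMonitor, its parameters, and the update rule are all assumptions. It shows one simple way a robot could track self-trust in its predictive abilities as an exponential moving average of prediction outcomes, deferring to the user once self-trust falls below a threshold.

    # Illustrative sketch only (hypothetical names and update rule, not the
    # paper's implementation): the robot revises a self-trust score after each
    # prediction about the user, and acts autonomously only while it stays high.

    class SelfTrustMonitor:
        def __init__(self, initial_trust: float = 0.8,
                     learning_rate: float = 0.1,
                     threshold: float = 0.5):
            self.trust = initial_trust      # current self-trust in predictive skill
            self.learning_rate = learning_rate
            self.threshold = threshold      # below this, defer to the user

        def update(self, prediction_was_correct: bool) -> None:
            # Exponential moving average of prediction outcomes (1 = correct).
            outcome = 1.0 if prediction_was_correct else 0.0
            self.trust += self.learning_rate * (outcome - self.trust)

        def should_act_autonomously(self) -> bool:
            # Adopt the delegated task autonomously only while self-trust is high.
            return self.trust >= self.threshold

    # Example: as predictive skill degrades, self-trust falls and the robot
    # stops acting autonomously.
    monitor = SelfTrustMonitor()
    for correct in [True, True, False, False, False, False, False]:
        monitor.update(correct)
    print(round(monitor.trust, 2), monitor.should_act_autonomously())  # 0.49 False

In this toy version, a run of failed predictions drives self-trust below the threshold, at which point the robot would fall back to asking the user instead of acting on its (now unreliable) user model, mirroring the degradation scenario the experiment is designed to probe.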
