Abstract
People often have reciprocal habits, almost automatically responding to others' actions. A robot that interacts with humans may also reciprocate, in order to come across as natural and be predictable. We aim to facilitate decision support that advises agents on utility-efficient habits in these interactions. To this end, given a model of reciprocation behavior with parameters that represent habits, we define a game that describes which habit one should adopt to increase the utility of the process. This paper concentrates on two agents. The model we use defines an agent's action as a weighted combination of the other agent's previous action (reacting) and either (i) her innate kindness or (ii) her own previous action (inertia). In order to analyze what happens when everyone reciprocates rationally, we define a game in which an agent may choose her habit: either her reciprocation attitude ((i) or (ii)) alone, or both her reciprocation attitude and her weight. We characterize the Nash equilibria of these games and consider their efficiency. We find that the less kind agents should adjust to the kinder agents in order to improve both their own utility and the social welfare. This constitutes advice on improving cooperation and explains real-life phenomena in human interaction, such as the societal benefit of adopting the behavior of the kindest person, or of becoming more polite as one grows up.
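As a rough illustration only (the notation below is ours, not necessarily the paper's exact formulation): writing $a_i(t)$ for agent $i$'s action at time $t$, $k_i$ for her innate kindness, and $r_i \in [0,1]$ for her reciprocation weight, the two attitudes described above can be read as

$a_i(t) = r_i \, a_j(t-1) + (1 - r_i) \, k_i$  (attitude (i): anchored in innate kindness),

$a_i(t) = r_i \, a_j(t-1) + (1 - r_i) \, a_i(t-1)$  (attitude (ii): anchored in one's own previous action, i.e. inertia).

In the habit game sketched in the abstract, an agent's strategy is then her choice of attitude, or of attitude together with the weight $r_i$, fixed before the reciprocation process unfolds.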