Abstract

The emergence of social robots has created new opportunities for them to coexist with humans, but only if they can imitate human-like social behavior. Previous research has examined various robot social cues, such as emotions, gestures, and eye contact. However, one under-researched area is implicit group norms: unwritten rules that dictate the expected behavior of group members and can vary from group to group. By improving the ability of robots to behave in expected ways, we hope to promote greater acceptance of robots among humans. In this study, we propose a group norm-aware decision-making model to help robots adapt to group norms, which we evaluated in a human–agent experiment based on the ultimatum game. In this scenario, the gains and losses of one group member affect everyone else. Our results demonstrate that a group norm-aware decision-making agent promotes fairer distributions of benefits among group members, enhancing mutual benefit compared with an agent that does not consider group norms. This study provides a solid foundation for further research on developing social robots that are more adaptable and acceptable to humans. Additionally, our proposed model sets the stage for future robot experiments, ultimately leading to more equitable and empathetic human–robot interactions.
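For readers unfamiliar with the setup, the sketch below illustrates one possible form of such a group ultimatum game, in which a single member's proposal determines everyone's payoff, together with a simple norm-adaptive proposer. It is an illustrative assumption only; the class and parameter names (NormAwareProposer, learning_rate, etc.) are hypothetical and do not describe the model evaluated in the paper.

```python
# Minimal illustrative sketch (not the authors' implementation): a group
# ultimatum game where one member's proposal affects every member's payoff,
# plus an agent that nudges its offers toward the group's observed norm.

def play_round(offer_ratio, pot, accepted):
    """Split `pot` between the proposer and the rest of the group.

    If the group rejects the offer, nobody gains anything, so one member's
    choice affects every other member's outcome.
    """
    if not accepted:
        return 0.0, 0.0
    group_share = pot * offer_ratio
    return pot - group_share, group_share


class NormAwareProposer:
    """Hypothetical agent that adapts its offer toward the implicit group norm."""

    def __init__(self, initial_offer=0.3, learning_rate=0.2):
        self.offer = initial_offer
        self.learning_rate = learning_rate

    def update(self, observed_group_offers):
        # Treat the average offer observed in the group as the implicit norm
        # and move the agent's own offer one step toward it.
        norm = sum(observed_group_offers) / len(observed_group_offers)
        self.offer += self.learning_rate * (norm - self.offer)


agent = NormAwareProposer()
agent.update([0.5, 0.45, 0.55])  # group members tend to offer about half
proposer_gain, group_gain = play_round(agent.offer, pot=100, accepted=True)
print(round(agent.offer, 2), proposer_gain, group_gain)
```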
