Abstract

In multiagent systems, social dilemmas often arise when agents compete over limited resources. The major challenge is to establish cooperation among intelligent virtual agents to resolve these social dilemmas. In humans, personality and emotions are the primary factors that lead to cooperation. To make agents cooperate, they must become more humanlike, that is, believable. We therefore hypothesize that emotions expressed according to personality produce believability, and that introducing believability into agents through emotions improves their survival rate in social dilemma situations. Existing research has proposed various computational models for introducing emotions into virtual agents, but these models do not derive emotions from neurotransmitters. We propose a neurotransmitter-based deep Q-learning computational model for multiagent systems that is well suited to emotion modeling and, hence, believability. The proposed model regulates the agents' emotions by controlling virtual neurotransmitters (dopamine and oxytocin) according to each agent's personality, which is specified using the OCEAN model. To evaluate the proposed system, we simulated a survival scenario with limited food resources in a series of experiments that vary the numbers of selfish agents (high neuroticism) and selfless agents (high agreeableness). Experimental results show that adding selfless agents to the scenario leads the agents to develop cooperation and increases their collective survival time. Thus, social dilemmas among virtual agents can be resolved by making the agents believable through the proposed neurotransmitter-based emotional model. This work may also help in developing nonplayer characters (NPCs) in games.

Highlights

  • Intelligent agents are being employed in the field of robotics [1], games [2], entertainment [3], education [4], healthcare [5], customer services [6], and many more

  • If each herdsman increases his number of sheep for his own benefit, grass soon becomes scarce in the pasture. The literature suggests that cooperation among people is necessary to resolve social dilemmas [14–16]. Therefore, to solve social dilemmas among AI-controlled virtual agents, these agents must have believability so that cooperation and coordination develop among them [17]

  • Agreeableness and neuroticism are the personality traits best suited to social dilemma situations in virtual agents. These two traits make the agents selfless and selfish, respectively, contributing to their believability


Introduction

Intelligent agents are being employed in fields such as robotics [1], games [2], entertainment [3], education [4], healthcare [5], customer services [6], and many more. A multiagent system (MAS) is a group of autonomous agents interacting in the same environment to achieve a common goal [7]. In such systems, situations of social dilemma often arise. In Hardin's "Tragedy of the Commons" [13], a social dilemma in a survival scenario, a common pasture is shared among a community of herdsmen to graze sheep. Broekens et al. [56] proposed an emotional model of joy, distress, hope, and fear using reinforcement learning for a single agent in a maze scenario. Researchers have also investigated introducing emotions between two agents to establish cooperation in social dilemma scenarios [11, 58, 59]. Yu et al. [58] proposed a double-layered framework with emotional multiagent reinforcement learning that provided agents with emotional and cognitive capabilities to induce cooperation.
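The kind of neurotransmitter-modulated reward shaping described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' published model: the `EmotionalAgent` class, the linear dopamine/oxytocin formulas, and all coefficients are assumptions chosen only to show how OCEAN personality traits could bias a Q-learning reward signal, and tabular Q-learning stands in for a deep Q-network to keep the example self-contained.

```python
import random

class EmotionalAgent:
    """Illustrative agent: Q-learning with a reward signal modulated by
    virtual neurotransmitter levels (dopamine, oxytocin) that are biased
    by two OCEAN personality traits (neuroticism, agreeableness)."""

    def __init__(self, n_states, n_actions, agreeableness=0.5,
                 neuroticism=0.5, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.agreeableness = agreeableness  # high -> selfless behaviour
        self.neuroticism = neuroticism      # high -> selfish behaviour
        self.dopamine = 0.5                 # drive toward own payoff
        self.oxytocin = 0.5                 # drive toward group payoff
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def modulated_reward(self, own_reward, shared_reward):
        # Assumed linear mapping: neuroticism raises dopamine (weighting
        # the agent's own payoff), agreeableness raises oxytocin
        # (weighting the group's payoff).
        self.dopamine = 0.5 + 0.5 * self.neuroticism
        self.oxytocin = 0.5 + 0.5 * self.agreeableness
        return self.dopamine * own_reward + self.oxytocin * shared_reward

    def act(self, state):
        # Epsilon-greedy action selection over the Q-table row.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def learn(self, state, action, own_reward, shared_reward, next_state):
        # Standard Q-learning update on the emotionally modulated reward.
        r = self.modulated_reward(own_reward, shared_reward)
        best_next = max(self.q[next_state])
        self.q[state][action] += self.alpha * (
            r + self.gamma * best_next - self.q[state][action])
```

Under these assumptions, a high-agreeableness agent values shared (group) rewards more than a high-neuroticism agent does, so cooperative actions accumulate higher Q-values for it over repeated interactions.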
