Abstract

This paper presents a new framework of multi-agent reinforcement learning for acquiring cooperative behaviors by generating and coordinating learning goals interactively among agents. One of the main goals of artificial intelligence is to realize an intelligent agent that behaves autonomously according to its own sense of values. Reinforcement learning (RL) is the major learning mechanism by which an agent flexibly adapts itself to various situations in an unknown environment. However, in a multi-agent environment with mutual dependency among agents, it is difficult for a human to set up suitable learning goals for each agent, and the existing RL framework, which aims at the egoistic optimality of each agent, is inadequate. Therefore, an active and interactive learning mechanism is required to generate and coordinate the learning goals among the agents. To realize this, we first propose treating each learning goal as a reinforcement signal (RS) that can be communicated among the agents. Second, we introduce motivation rules that integrate the RSs communicated among the agents into a reward value for an agent's RL. We then define cooperative rewards as learning goals with mutual dependency. Learning experiments with two agents under various motivation rules are performed, and the results show that several combinations of motivation rules converge to cooperative behaviors.
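The sketch below illustrates the core idea of the abstract in code: each agent's learning goal is expressed as a reinforcement signal (RS) that can be communicated, and a motivation rule maps the agent's own RS together with the RSs received from other agents to the scalar reward used in its RL update. The rule names and the equal weighting shown here are illustrative assumptions, not the paper's actual definitions of its motivation rules.

```python
from typing import Callable, Dict

# A motivation rule maps (own RS, RSs received from other agents) -> reward.
MotivationRule = Callable[[float, Dict[str, float]], float]

def egoistic(own_rs: float, others_rs: Dict[str, float]) -> float:
    """Use only the agent's own reinforcement signal (standard, egoistic RL)."""
    return own_rs

def altruistic(own_rs: float, others_rs: Dict[str, float]) -> float:
    """Use only the signals communicated by the other agents."""
    return sum(others_rs.values()) / max(len(others_rs), 1)

def cooperative(own_rs: float, others_rs: Dict[str, float]) -> float:
    """Blend own and received signals (the 50/50 weighting is an assumption)."""
    return 0.5 * own_rs + 0.5 * altruistic(own_rs, others_rs)

# Example: agent A's reward under a cooperative motivation rule,
# given its own RS and the RS communicated by agent B.
reward_a = cooperative(own_rs=1.0, others_rs={"agent_b": 0.2})
print(reward_a)  # 0.6
```

Under this formulation, the reward of each agent depends on signals produced by the others, which is what makes the resulting learning goals mutually dependent ("cooperative rewards" in the abstract's terminology).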
