Abstract

This paper presents a framework, called the knowledge co-creation framework (KCF), for heterogeneous multi-agent robot systems that use a transfer learning method. Multi-agent robot systems (MARSs) that utilize reinforcement learning and transfer learning have recently been studied in real-world situations. In a MARS, autonomous agents acquire behavior through multi-agent reinforcement learning, and transfer learning enables the reuse of other robots' behavioral knowledge, such as knowledge of cooperative behavior. These methods, however, have not been fully and systematically discussed. To address this, KCF leverages transfer learning and cloud-computing resources. In prior research, we developed ontology-based inter-task mapping as a core technology of the hierarchical transfer learning (HTL) method and investigated its effectiveness in a dynamic multi-agent environment. The HTL method hierarchically abstracts acquired knowledge by ontological methods. Here, we evaluate the effectiveness of HTL with a basic experimental setup that considers two types of ontology: action and state.
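The abstract describes ontology-based inter-task mapping as the mechanism that lets one robot's knowledge be reused by a different type of robot. A minimal sketch of that idea is shown below; the dictionary-based ontology, the flat Q-table format, and all action names are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of ontology-based inter-task mapping (ITM): concrete actions of a
# source and a target agent are linked through shared abstract concepts in
# an action ontology, so Q-values learned by one agent can be re-keyed onto
# the other agent's action space. All names here are hypothetical.

SOURCE_TO_ABSTRACT = {"move_n": "go_forward", "move_s": "go_back"}
ABSTRACT_TO_TARGET = {"go_forward": "wheel_fwd", "go_back": "wheel_rev"}

def transfer_q(source_q):
    """Map source (state, action) -> value entries onto the target agent's
    action space via the abstract layer of the action ontology."""
    target_q = {}
    for (state, action), value in source_q.items():
        abstract = SOURCE_TO_ABSTRACT.get(action)
        target_action = ABSTRACT_TO_TARGET.get(abstract)
        if target_action is not None:  # drop actions with no mapping
            target_q[(state, target_action)] = value
    return target_q

src = {("s0", "move_n"): 0.8, ("s0", "move_s"): 0.2}
print(transfer_q(src))  # {('s0', 'wheel_fwd'): 0.8, ('s0', 'wheel_rev'): 0.2}
```

The same pattern applies to the state ontology: states of heterogeneous agents are matched through a shared abstraction layer rather than directly.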

Highlights

  • Actual multi-agent robot systems (MARSs) have recently been deployed in real-world situations

  • Multi-agent reinforcement learning (MARL) is a mechanism for implementing a posteriori cooperation among agents, which can behave adaptively in a dynamic environment even when they are not provided with specific control policies

  • The difference in convergence steps (DCS) and the ratio of DCS (RDCS) are defined as evaluation metrics
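The DCS and RDCS metrics mentioned above are not defined on this page; a plausible sketch is given below, assuming DCS is the reduction in convergence steps gained by transfer and RDCS is that reduction normalized by the no-transfer baseline. These formulas are assumptions, not the paper's exact definitions.

```python
def dcs(steps_without_transfer, steps_with_transfer):
    # Assumed definition: how many fewer steps learning takes to
    # converge when knowledge is transferred in.
    return steps_without_transfer - steps_with_transfer

def rdcs(steps_without_transfer, steps_with_transfer):
    # Assumed definition: DCS normalized by the no-transfer baseline,
    # giving a scale-free measure of transfer benefit.
    return dcs(steps_without_transfer, steps_with_transfer) / steps_without_transfer

print(rdcs(10000, 6000))  # 0.4 -> transfer cut convergence steps by 40%
```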

Summary

INTRODUCTION

Actual multi-agent robot systems (MARSs) have recently been deployed in real-world situations. Cloud robotics may increase the utility of MARSs because the robots gain access to broader knowledge, vast computing resources, and external functions. This should be helpful for achieving practical implementation of MARSs with MARL. The HTL method enables inter-task mapping (ITM) by using ontology among heterogeneous agents. This allows autonomous robots and virtual agents to reuse knowledge from other types of robots and agents. We describe experiments confirming that HTL enables the reuse of knowledge by using action and state ontologies to mediate among heterogeneous MARSs. The rest of the paper is organized as follows. When the values of available actions are the same or are equal to the default value, the Boltzmann distribution is used to select an action at random.
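The Boltzmann action selection mentioned above can be sketched as follows: each action is sampled with probability proportional to exp(Q/T), which reduces to a uniform random choice when all available actions share the same (e.g. default) value. The function name and temperature parameter are illustrative; the paper's exact settings are not given on this page.

```python
import math
import random

def boltzmann_select(q_values, temperature=1.0):
    """Sample an action with probability proportional to exp(Q/T).

    With equal (e.g. default) Q-values this reduces to a uniform random
    choice, matching the behaviour described in the text above.
    """
    actions = list(q_values)
    # Subtract the max Q-value for numerical stability before exponentiating.
    m = max(q_values.values())
    weights = [math.exp((q_values[a] - m) / temperature) for a in actions]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(actions, weights=probs, k=1)[0]

q = {"up": 0.0, "down": 0.0, "left": 0.0, "right": 0.0}
boltzmann_select(q)  # all values equal -> uniform random choice
```

A low temperature concentrates probability on the highest-valued action (near-greedy), while a high temperature approaches uniform exploration.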

Transfer Learning in Reinforcement Learning
Reinforcement Learning
Heterogeneity of Robots and Agents
Ontology-based ITMs
Method for Transfer of Knowledge
Pursuit Game
Difference in Tasks
Heterogeneity of Agents
Experimental Conditions
EXPERIMENTAL RESULTS AND DISCUSSION
Results for Self-transfer
Results with Different Action Spaces
Results with Different State Spaces
Results with Heterogeneous Conditions
CONCLUSION