Abstract

Reinforcement learning (RL) plays an essential role in artificial intelligence but suffers from data inefficiency and model-shift issues. One possible way to address these issues is transfer learning; however, without explainable models, interpretability problems and negative transfer may occur. In this article, we define Relation Transfer as explainable and transferable learning based on graphical model representations, which infers the skeleton and relations among variables from a causal view and generalizes them to the target domain. The proposed algorithm consists of three steps. First, we apply a suitable causal discovery method to identify the causal graph from the augmented source-domain data. Next, we infer the target model using the learned causal graph as prior knowledge. Finally, a policy trained offline on the inferred target model serves as prior knowledge to improve policy training in the target domain. The proposed method answers the question of what to transfer and realizes zero-shot transfer across related domains in a principled way. To demonstrate the robustness of the proposed framework, we conduct experiments on four classical control problems as well as one simulation-to-real-world application. Experimental results on both continuous and discrete cases demonstrate the efficacy of the proposed method.
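
The three-step pipeline can be outlined in code. The sketch below is a minimal illustration only: the helper names (`discover_causal_graph`, `infer_target_model`, `offline_policy_training`) are hypothetical placeholders with stubbed bodies, not the authors' implementation or any library's API; in practice, the causal discovery and offline RL steps would be supplied by concrete methods of the reader's choosing.

```python
# Minimal structural sketch of the three-step Relation Transfer pipeline
# described in the abstract. All function names and bodies below are
# hypothetical placeholders, not the paper's actual implementation.

import numpy as np


def discover_causal_graph(source_data: np.ndarray) -> np.ndarray:
    """Step 1: run a causal discovery method on augmented source-domain
    data to recover the skeleton and causal relations among variables.
    Placeholder: returns an empty adjacency matrix over the variables."""
    n_vars = source_data.shape[1]
    return np.zeros((n_vars, n_vars), dtype=int)


def infer_target_model(causal_graph: np.ndarray,
                       target_obs: np.ndarray) -> dict:
    """Step 2: treat the learned causal graph as prior knowledge and fit
    the target-domain model from a small set of target observations.
    Placeholder: stores the graph plus crude per-variable statistics."""
    return {"graph": causal_graph, "params": target_obs.mean(axis=0)}


def offline_policy_training(target_model: dict, n_iters: int = 100) -> dict:
    """Step 3: train a policy offline against the inferred target model;
    the result serves as a prior (zero-shot) policy for the target domain.
    Placeholder: the optimization loop body is stubbed out."""
    policy = {"weights": np.zeros_like(target_model["params"])}
    for _ in range(n_iters):
        pass  # model-based policy optimization updates would go here
    return policy


# Illustrative usage: augmented source transitions in, zero-shot policy out.
source_data = np.random.randn(1000, 6)   # stacked (state, action) variables
target_obs = np.random.randn(20, 6)      # few observations from the target
graph = discover_causal_graph(source_data)
model = infer_target_model(graph, target_obs)
policy = offline_policy_training(model)
```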
