Demonstrative learning in virtual reality (VR) is a pivotal strategy for knowledge transfer to embodied agents. While existing studies have extensively explored agents' knowledge transfer through self-demonstrative learning (SDL) or teacher-demonstrative learning (TDL), little attention has been paid to systems that integrate both paradigms. This paper proposes a comprehensive system that combines the SDL and TDL paradigms in VR from a top-down perspective. The system uses directed probabilistic graphs (DPGs) for knowledge representation, constructs detectors and actuators based on object fluents and atomic actions, represents the knowledge acquired from both learning paradigms on the DPGs, and incorporates knowledge integration and visualization. Through system evaluation, we show the advantages of integrating the two demonstrative learning paradigms: increased learning efficiency, mitigation of the demonstrator's deficiencies, and more logical task execution. The study also reveals that a fusion factor that decreases dynamically as learning progresses performs well across different percentages of correct teacher demonstrations. Additionally, we show that a more decentralized distribution of demonstrations around the middle of the agent's learning progress maximizes learning efficiency when only a few atomic actions are demonstrated. Finally, we assess the system's generalizability and delineate its boundaries. With the ongoing development of VR and the increasing availability of human data in VR, we anticipate that our system can be applied to future scenarios of efficient human-agent knowledge transfer in human-agent symbiosis.
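To make the fusion-factor idea concrete, the sketch below shows one plausible reading of knowledge integration, not the authors' implementation: a DPG transition probability is estimated as a convex combination of the teacher-derived estimate (`p_tdl`) and the self-learned estimate (`p_sdl`), with a fusion factor that decays as learning progresses. The linear decay schedule, the initial weight `lam0`, and the function name are illustrative assumptions.

```python
def fused_probability(p_sdl: float, p_tdl: float, t: int, t_max: int,
                      lam0: float = 0.9) -> float:
    """Fuse self- and teacher-demonstration estimates of a DPG
    transition probability.

    Assumption: the fusion factor decays linearly from lam0 to 0
    as learning progress t approaches t_max, so teacher knowledge
    dominates early and the agent's own experience dominates late.
    """
    lam = lam0 * (1.0 - t / t_max)  # dynamically decreasing fusion factor
    return lam * p_tdl + (1.0 - lam) * p_sdl

# Example: the fused estimate shifts from the teacher's value (0.8)
# toward the agent's self-learned value (0.3) over the course of training.
for t in (0, 50, 100):
    print(t, fused_probability(p_sdl=0.3, p_tdl=0.8, t=t, t_max=100))
```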