Abstract
Digital twin (DT) systems, which digitally replicate the physical world, are powerful tools for monitoring physical systems and evaluating algorithms, but current DT systems are generally not designed for robotic deployment and investigation. Meanwhile, existing 3-D simulation-based robotic platforms do not model the dynamics of the physical world on-the-fly as DT systems do, limiting their potential for developing robots for challenging environments. To address this issue, we propose the first robot-centered smart DT framework, named Terra, to facilitate the deployment of robots in challenging environments. Terra introduces a comprehensive DT representation that encodes the useful real-time dynamics of both the physical world and the robot agent deployed in it. A multiview, multimodality perception module is further devised to obtain high-level semantics and deliver a precise description of the current status of the environment and the robot agent. By mapping the perceived results onto the virtual replica of the physical environment, Terra actively updates the action policy and sends it back to the agent, forming an integral, real-time information feedback loop. In practice, to demonstrate the effectiveness and feasibility of the proposed framework, we deliberately set up a challenging, unordered physical environment with many obstacles and a very simple robot tasked with navigation. Empirical results show that Terra successfully enables the robot to accomplish the task without causing hazards.
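The perceive, mirror, plan, and act cycle described above can be sketched in code. The sketch below is purely illustrative and is not taken from the paper: all class and function names (MultiviewPerception, DigitalTwin, feedback_loop, and so on) are assumed placeholders for the unnamed components of Terra, and the obstacle-avoidance rule is a trivial stand-in for the actual action policy.

```python
# Illustrative sketch (not from the paper) of the robot-centered DT feedback
# loop described in the abstract: perceive the physical scene, mirror it in the
# virtual replica, update the action policy, and send the action to the agent.
# All names and interfaces here are hypothetical placeholders.

from dataclasses import dataclass, field


@dataclass
class TwinState:
    """Virtual replica of the environment and the robot agent."""
    obstacles: list = field(default_factory=list)   # perceived obstacle positions
    robot_pose: tuple = (0.0, 0.0, 0.0)             # (x, y, heading)


class MultiviewPerception:
    """Stand-in for the multiview, multimodality perception module."""

    def perceive(self, sensor_frames):
        # The real module would fuse several views and modalities into
        # high-level semantics; here we simply pass structured data through.
        return {
            "obstacles": sensor_frames.get("obstacles", []),
            "robot_pose": sensor_frames.get("robot_pose", (0.0, 0.0, 0.0)),
        }


class DigitalTwin:
    """Maps perceived results onto the virtual replica and derives actions."""

    def __init__(self):
        self.state = TwinState()

    def update(self, semantics):
        self.state.obstacles = semantics["obstacles"]
        self.state.robot_pose = semantics["robot_pose"]

    def plan_action(self, goal):
        # Placeholder policy: head toward the goal unless an obstacle is near.
        x, y, _ = self.state.robot_pose
        gx, gy = goal
        too_close = any(abs(ox - x) + abs(oy - y) < 1.0
                        for ox, oy in self.state.obstacles)
        return "stop" if too_close else ("move", gx - x, gy - y)


def feedback_loop(sensor_stream, goal=(5.0, 5.0)):
    """One perceive -> mirror -> plan -> act pass per incoming sensor frame."""
    perception, twin = MultiviewPerception(), DigitalTwin()
    for frames in sensor_stream:
        twin.update(perception.perceive(frames))   # keep the replica current
        yield twin.plan_action(goal)               # action sent back to the agent


if __name__ == "__main__":
    frames = [{"robot_pose": (0.0, 0.0, 0.0), "obstacles": [(0.4, 0.3)]},
              {"robot_pose": (1.0, 1.0, 0.0), "obstacles": [(4.0, 4.0)]}]
    for action in feedback_loop(frames):
        print(action)
```

The point of the sketch is only the data flow: perception output is written into the twin's state before any action is computed, so the policy always reasons over the replica rather than over raw sensor data.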