Telepresence robots are gaining popularity as a means of remote communication and human–robot interaction, allowing users to operate a physical robot from a distance. Controlling these robots can be challenging, however, because of the inherent delays and latency of the communication channel. In this paper, we propose a novel hybrid algorithm that combines deep reinforcement learning (DRL), specifically a dueling double deep Q-network (DDQN), with a gated recurrent unit (GRU) to assist in maneuvering the telepresence robot when the operator's control signals are delayed. The DDQN learns the optimal control policy for the telepresence robot in a remote healthcare environment under delayed communication, while the GRU models the temporal dependencies of the control signals and handles variable time delays in the communication system. The proposed hybrid approach is evaluated both analytically and experimentally, and the results demonstrate its effectiveness in improving the tracking accuracy and stability of telepresence robots. Multiple experiments show that the proposed technique achieves improved control efficiency without guidance from the teleoperator: it can autonomously control and manage the robot's operation during communication delays of 15 seconds, which is 2.4% better than existing approaches. Overall, the proposed hybrid approach demonstrates the potential of reinforcement learning and deep learning techniques for improving the control and stability of telepresence robots during delayed operating signals from the operator.
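To make the two named DRL components concrete, the sketch below illustrates, in plain Python, the dueling Q-value aggregation and the double-DQN target computation that underlie a dueling double deep Q-network. This is an illustrative sketch only, not the authors' implementation; all function names and numeric values are hypothetical, and the GRU and network layers are omitted.

```python
# Illustrative sketch (not the paper's code): the two core ideas behind a
# dueling double deep Q-network, with hypothetical names and values.

def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean over a' of A(s,a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: the next action is selected by the online network
    but evaluated by the target network, reducing overestimation bias."""
    if done:
        return reward
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]

# Example with three discrete motion commands (hypothetical values).
q = dueling_q_values(value=1.0, advantages=[0.5, -0.1, -0.4])
y = double_dqn_target(reward=0.2, gamma=0.99,
                      q_online_next=[0.3, 0.8, 0.1],
                      q_target_next=[0.4, 0.6, 0.2],
                      done=False)
```

In a hybrid design like the one the abstract describes, the state fed to such a network would come from a GRU summarizing the recent (possibly delayed) control-signal history rather than from a single raw observation.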