Abstract

Robot visual servoing guides the motion of a robot using real-time visual observations. Kinematics-based control is a key approach to visual servoing. A central challenge of kinematics-based visual servoing is that its parameters must be reconfigured over time throughout a single task, and retuned again when the method is applied to a different task. Existing work on parameter tuning either lacks adaptivity or cannot automate the tuning of all parameters, and existing methods transfer poorly from one task to another. This work develops a Deep Reinforcement Learning (DRL) framework for robot visual servoing that automates the tuning of all parameters, both within a single task and across tasks. In visual servoing, forward kinematics governs the speed of motion, while inverse kinematics governs its smoothness. We therefore develop two separate modules in the proposed DRL framework: one tunes time-varying forward-kinematics parameters to accelerate the motion, and the other tunes inverse-kinematics parameters to ensure smoothness. Moreover, we customize a knowledge-transfer method to generalize the proposed DRL models to various robot tasks without reconstructing the neural network. We verify the proposed method on simulated robot tasks. The experimental results show that it outperforms state-of-the-art methods and manual parameter configuration in terms of movement speed and smoothness, both within a single task and across tasks.
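The abstract does not specify the concrete servoing law or which parameters each module controls, so the following is a minimal Python sketch under assumptions of our own: a classical image-based servo law whose gain (speed) is tuned by the forward-kinematics module and whose damped-least-squares factor (smoothness) is tuned by the inverse-kinematics module. `ParamPolicy` and `visual_servo_step` are hypothetical names, and the policies are stubs standing in for trained DRL actors; this is not the paper's implementation.

```python
import numpy as np


class ParamPolicy:
    """Stand-in for a trained DRL actor that maps an observation to a
    bounded servoing parameter. A real system would load a trained
    network (e.g. a PPO/SAC actor); here we return the mid-range value
    so the sketch runs as-is."""

    def __init__(self, low, high):
        self.low, self.high = float(low), float(high)

    def act(self, observation):
        # Placeholder action: a trained policy would condition on `observation`.
        return 0.5 * (self.low + self.high)


def visual_servo_step(feature_error, joint_angles, fk_policy, ik_policy):
    """One control step in which the two DRL policies retune the servoing
    parameters online: a gain (speed) and a damping factor (smoothness)."""
    obs = np.concatenate([feature_error, joint_angles])
    gain = fk_policy.act(obs)      # time-varying gain: larger -> faster motion
    damping = ik_policy.act(obs)   # damped least-squares factor -> smoother motion

    # Placeholder interaction (image) Jacobian; a real system computes it
    # from the camera model and the current feature depths.
    J = np.random.default_rng(0).standard_normal(
        (feature_error.size, joint_angles.size))

    # Classical servo law  q_dot = -gain * J^+ e  with a damped pseudo-inverse.
    JJt = J @ J.T + damping * np.eye(feature_error.size)
    q_dot = -gain * (J.T @ np.linalg.solve(JJt, feature_error))
    return q_dot


# Usage: one step with a 4-D image-feature error and a 6-DoF arm.
fk_policy = ParamPolicy(low=0.1, high=2.0)    # assumed gain range
ik_policy = ParamPolicy(low=1e-4, high=1e-1)  # assumed damping range
q_dot = visual_servo_step(0.05 * np.ones(4), np.zeros(6), fk_policy, ik_policy)
print(q_dot)  # joint-velocity command for this control step
```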
