Assembly positioning by visual servoing (VS) is a foundation of autonomous robotic assembly. In practice, VS control suffers from potential stability and convergence problems caused by image and physical constraints, e.g., field-of-view limits, image local minima, obstacle collisions, and occlusion. Therefore, this article proposes a novel deep reinforcement learning-based hybrid visual servoing (DRL-HVS) controller for motion planning of VS tasks. The DRL-HVS controller takes the currently observed image features and the camera pose as inputs and dynamically optimizes the core parameters of the hybrid VS with a deep deterministic policy gradient (DDPG) algorithm, yielding an optimal motion scheme that accounts for image/physical constraints and robot motion performance. In addition, an adaptive exploration strategy is proposed to further improve training efficiency by tuning the exploration noise parameters during training. In this way, the DRL-HVS controller, pretrained offline in a virtual environment where the DDPG actor–critic network is continuously optimized, can be quickly deployed to a real robot system for real-time control. Experiments on an eye-in-hand VS system are conducted with a calibrated HIKVISION RGB camera mounted on the end-effector of a GSK-RB03A1 six-degree-of-freedom (6-DoF) robot. Basic VS task experiments show that the proposed controller outperforms existing methods: the servoing time is 24% shorter than that of the five-dimensional VS method, the success rate is 100% for initial-pose perturbations within 25 mm in translation and 20° in rotation, and efficiency is improved by 48%. Moreover, a case study of a planetary gear component assembly process, in which the robot automatically places the gears on the gear shafts, demonstrates the practical applicability of the proposed method.
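The abstract does not include code; the sketch below is only a minimal illustration of the kind of loop it describes, i.e., a DDPG-style actor producing hybrid-VS parameters from the observed state, with Gaussian exploration noise whose scale adapts over training. All names (`AdaptiveExplorer`, `actor`, the decay rule, dimensions) are hypothetical assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): DDPG-style action selection
# with adaptive exploration noise. Assumed state: stacked image-feature
# errors plus camera pose; assumed action: hybrid-VS parameter vector.
import numpy as np


class AdaptiveExplorer:
    """Gaussian exploration noise whose scale shrinks as episode returns
    improve -- one possible reading of an 'adaptive exploration strategy'."""

    def __init__(self, action_dim, sigma_init=0.3, sigma_min=0.02, decay=0.97):
        self.action_dim = action_dim
        self.sigma = sigma_init
        self.sigma_min = sigma_min
        self.decay = decay
        self.best_return = -np.inf

    def perturb(self, action, low, high):
        # Add exploration noise and keep the action inside its valid range.
        noisy = action + np.random.normal(0.0, self.sigma, size=self.action_dim)
        return np.clip(noisy, low, high)

    def update(self, episode_return):
        # Reduce noise only when the policy keeps improving; otherwise keep
        # exploring at the current level (illustrative rule, not from the paper).
        if episode_return > self.best_return:
            self.best_return = episode_return
            self.sigma = max(self.sigma_min, self.sigma * self.decay)


def actor(state):
    # Placeholder for the trained DDPG actor network: maps the observed state
    # to hybrid-VS parameters (random linear map used here purely for demo).
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((3, state.size)) * 0.1
    return np.tanh(weights @ state)


if __name__ == "__main__":
    explorer = AdaptiveExplorer(action_dim=3)
    state = np.zeros(10)                       # e.g., feature errors + camera pose
    action = explorer.perturb(actor(state), low=-1.0, high=1.0)
    explorer.update(episode_return=12.5)       # called once per training episode
    print("exploratory action:", action, "sigma:", explorer.sigma)
```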