Abstract

Image-based visual servoing poses a significant challenge for robotic systems, as it requires detecting the task object and controlling the robot arm from image feedback alone. The task is further complicated by interference such as changes in ambient lighting, visual distractors, and background clutter. Recent research suggests that reinforcement learning is a promising approach for learning efficient control policies in this setting. In this paper, we propose a data-driven approach to closed-loop visual servoing based on a reinforcement learning algorithm that requires no prior knowledge of the task object or of the intrinsic camera parameters. Our method uses a convolutional neural network for object detection together with a servoing strategy that lets the robot determine the relative camera motion needed to reach the desired pose. Experimental results demonstrate that the proposed approach successfully steers the camera using only a single template image of the task object.
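To illustrate the closed-loop structure the abstract describes, the sketch below shows a minimal image-based servoing loop: a detector localizes the object in the current image, the error against the feature's position in the template image is computed, and a proportional controller drives that error toward zero. Everything here is a simplifying assumption for illustration (the `detect` stub, the proportional `gain`, the simulated pixel dynamics); the paper's actual method uses a CNN detector and a learned reinforcement-learning policy rather than this hand-tuned controller.

```python
import numpy as np

def detect(image_state):
    # Hypothetical detector stub: in the paper a CNN localizes the task
    # object in the camera image; here we simply return the simulated
    # pixel coordinates of the object.
    return image_state

def servo_step(current_px, desired_px, gain=0.2):
    # Proportional image-based control: command a motion that moves the
    # detected feature toward its location in the template image.
    error = desired_px - current_px
    return gain * error  # commanded image-plane displacement

# Close the loop against a single "template" target position.
desired = np.array([320.0, 240.0])   # feature location in the template image
current = np.array([100.0, 400.0])   # initial detection in the live image
for _ in range(50):
    current = current + servo_step(detect(current), desired)
```

Under this idealized model the image-plane error shrinks geometrically (by a factor of 1 - gain per step), which is the convergence behavior a successful servoing policy is expected to reproduce under real perturbations.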
