Abstract

Image-based visual servoing (IBVS) uses visual feedback to control positioning and motion precisely with respect to relatively stationary targets. In IBVS, a mixture parameter $\beta$ yields a better approximation of the image Jacobian matrix, which has a significant effect on performance. However, the appropriate value of the mixture parameter depends on the camera's real-time pose, and for most IBVS applications there is no clear rule for how it should change. This article proposes a method that adaptively adjusts the image Jacobian matrix for IBVS using Q-learning, a simple model-free reinforcement learning algorithm. When the state space is discretized, traditional Q-learning suffers from resolution problems that can cause sudden changes in the action, so the visual servoing system performs poorly. Moreover, a robot in a real-world environment cannot learn at the scale of virtual agents, so learning efficiency must be increased. The proposed method therefore uses fuzzy state coding to accelerate learning during the training phase and to produce a smooth output when the learned policy is applied. A delay-compensation method further allows more accurate feature extraction in a real environment. Simulation and experimental results demonstrate that the proposed method outperforms other methods in learning speed, movement trajectory, and convergence time.
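The mixture-parameter idea described above can be illustrated with the standard point-feature interaction (image Jacobian) matrix from classical IBVS. The sketch below is illustrative only: the function names are invented, $\beta$ is held fixed rather than adapted by Q-learning as in the paper, and a unit focal length is assumed.

```python
import numpy as np

def interaction_matrix(point, Z, f=1.0):
    """Classic IBVS interaction (image Jacobian) matrix for one
    normalized image point (x, y) at depth Z and focal length f.
    Maps the 6-DOF camera velocity to the image-feature velocity."""
    x, y = point
    return np.array([
        [-f / Z, 0.0, x / Z, x * y, -(f + x * x), y],
        [0.0, -f / Z, y / Z, f + y * y, -x * y, -x],
    ])

def mixed_jacobian(L_current, L_desired, beta):
    """Blend the Jacobians evaluated at the current and desired poses.
    beta = 1 uses only the current-pose Jacobian, beta = 0 only the
    desired-pose one. The paper adjusts beta online with Q-learning;
    here it is simply a fixed hypothetical value."""
    return beta * L_current + (1.0 - beta) * L_desired

# Example: blend equally between the two pose estimates (beta = 0.5).
L_c = interaction_matrix((0.1, 0.2), Z=1.0)   # current pose
L_d = interaction_matrix((0.0, 0.0), Z=1.0)   # desired pose
L_hat = mixed_jacobian(L_c, L_d, beta=0.5)
```

With $\beta = 0.5$ this reduces to the well-known mean-of-Jacobians approximation; the contribution of the paper is choosing $\beta$ adaptively from the camera's state rather than fixing it.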
