Abstract

We propose an approach for learning the fine-positioning of a parallel-jaw gripper on a robot arm using visual sensor data as controller input. The first component of the model can be viewed as a perceptron network that projects high-dimensional input data into a low-dimensional eigenspace. The dimension reduction is efficient if the movements achieving optimal positioning are constrained to a local scenario. The second component is an adaptive fuzzy controller serving as an interpolator whose input space is the eigenspace and whose outputs are the motion parameters. Instead of undergoing a cumbersome hand–eye calibration process, our system is trained in a self-supervised learning procedure using systematic perturbation motions around the optimal position. The approach is applied to tasks with three degrees of freedom, i.e. translating the gripper in the x–y plane and rotating it about the z-axis.
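The two-stage pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the perturbation "images" are fabricated synthetically, PCA via SVD stands in for the perceptron-style eigenspace projection, and a least-squares linear map stands in for the adaptive fuzzy controller. All names and dimensions are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Self-supervised data collection (simulated): perturb the gripper around
# the optimal pose and record (image, motion-to-optimum) pairs. In the real
# system the images come from the camera; here we fabricate 64-dimensional
# "images" that depend linearly on the 3-DOF pose offset (dx, dy, dtheta).
n_samples, img_dim = 200, 64
poses = rng.uniform(-1.0, 1.0, size=(n_samples, 3))
mixing = rng.normal(size=(3, img_dim))          # hypothetical imaging model
images = poses @ mixing + 0.01 * rng.normal(size=(n_samples, img_dim))

# Component 1: project the high-dimensional images into a low-dimensional
# eigenspace (PCA computed via SVD of the centered data).
mean = images.mean(axis=0)
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
k = 3                                           # eigenspace dimension


def project(x):
    """Project images (rows) into the k-dimensional eigenspace."""
    return (np.atleast_2d(x) - mean) @ vt[:k].T


# Component 2: an interpolator from eigenspace coordinates to motion
# parameters. A least-squares linear map is a crude placeholder for the
# adaptive fuzzy controller of the paper.
z = project(images)
weights, *_ = np.linalg.lstsq(z, poses, rcond=None)


def fine_position(image):
    """Return the predicted (dx, dy, dtheta) correction for one image."""
    return (project(image) @ weights)[0]


# Sanity check: a held-out perturbed view should map back near its pose.
test_pose = np.array([0.3, -0.2, 0.5])
pred = fine_position(test_pose @ mixing)
```

Because the synthetic imaging model is linear, the three leading eigenvectors span the signal subspace and the recovered correction closely matches the true perturbation; in the paper's setting the fuzzy controller handles the nonlinearity that a linear map cannot.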
