Abstract

This paper shows that a robot with a hand-eye system can learn a hand-reaching task without explicit computation of the target, obstacle, and hand locations. The system consists of a neural network whose inputs are the raw visual sensory signals, the joint angles of the arm, and a flag indicating the existence of the obstacle. The network is trained by reinforcement learning, and the reward is given only when the hand reaches the target, which can be detected only through the visual sensor. To demonstrate the effectiveness of this learning, the following three conditions are imposed: (1) the target, obstacle, and hand cannot be distinguished from one another in the visual image; (2) the hand can disappear from the visual field; (3) the obstacle appears randomly, but its location is always the same. The initial hand location and the target location are chosen randomly at each trial. After learning, the robot could reach its hand to the target. Analysis of the hidden neurons' representation after reinforcement learning showed that the target location was not represented independently of the hand location, either in the work (visual sensory) space or in the joint space. Furthermore, the representation of the hand location was acquired by mixing the joint angles and the visual signals.
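The input encoding and the sparse reward described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions (`N_VISUAL`, `N_JOINTS`) and the helper names are assumptions, since the abstract does not specify them.

```python
# Hypothetical sizes -- the abstract does not give the actual sensor
# resolution or number of arm joints.
N_VISUAL = 64   # raw visual sensory signals (assumed)
N_JOINTS = 2    # joint angles of the arm (assumed planar arm)

def make_state(visual, joints, obstacle_present):
    """Network input vector: raw vision + joint angles + obstacle flag.
    No explicit target/obstacle/hand locations are ever computed."""
    return list(visual) + list(joints) + [1.0 if obstacle_present else 0.0]

def reward(hand_on_target):
    """Sparse reward: given only when the hand reaches the target."""
    return 1.0 if hand_on_target else 0.0

state = make_state([0.0] * N_VISUAL, [0.3, -1.2], obstacle_present=True)
print(len(state))  # N_VISUAL + N_JOINTS + 1 inputs to the network
```

The point of the sketch is that the network receives only this undifferentiated vector; any representation of hand or target location must emerge in the hidden layer through learning.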

