Abstract

In this paper, we present a spiking neural network architecture that autonomously learns to control a 4-degree-of-freedom robotic arm after an initial period of motor babbling. The aim of the network is to provide the joint commands that will move the end-effector in a desired spatial direction, given the current joint configuration of the arm. The spiking neurons have been simulated according to Izhikevich's model, which exhibits biologically realistic behaviour while remaining computationally efficient. The architecture is a feed-forward network whose input layers encode the intended movement direction of the end-effector in spatial coordinates, together with the proprioceptive information about the current joint angles of the arm. The motor commands are determined by decoding the firing patterns in the output layers. Both excitatory and inhibitory synapses connect the input and output layers, and their initial weights are set to random values. The network learns to map input stimuli to motor commands during a phase of repetitive action-perception cycles, in which Spike Timing-Dependent Plasticity (STDP) strengthens synapses between correlated neurons and weakens those between uncorrelated ones. The trained spiking neural network has been successfully tested on a kinematic model of the arm of an iCub humanoid robot.
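
For readers who want a concrete picture of the two mechanisms named above, the sketch below (Python) steps a single Izhikevich neuron with forward-Euler integration and applies a standard pairwise STDP rule to one synaptic weight. It is a minimal illustration, not the paper's implementation: the STDP amplitudes and time constants, the input current, and the toy spike trains are assumptions chosen only for demonstration, while the neuron parameters are the regular-spiking defaults from Izhikevich's 2003 model.

```python
import numpy as np

# Izhikevich neuron parameters (regular-spiking values from Izhikevich, 2003).
A, B, C, D = 0.02, 0.2, -65.0, 8.0
DT = 1.0  # integration time step in ms

def izhikevich_step(v, u, i_syn):
    """Advance membrane potential v and recovery variable u by one Euler step."""
    v = v + DT * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + i_syn)
    u = u + DT * A * (B * v - u)
    fired = v >= 30.0            # spike threshold in mV
    if fired:
        v, u = C, u + D          # after-spike reset
    return v, u, fired

# Pairwise STDP: potentiate when a presynaptic spike precedes the postsynaptic
# one, depress otherwise. Amplitudes and time constants are placeholder values.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms

def stdp_update(w, t_pre, t_post, w_min=0.0, w_max=1.0):
    """Return the synaptic weight updated for one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post: correlated activity, strengthen
        w += A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:    # post before pre: uncorrelated activity, weaken
        w -= A_MINUS * np.exp(dt / TAU_MINUS)
    return float(np.clip(w, w_min, w_max))

if __name__ == "__main__":
    # Toy demo: drive one neuron with a constant current plus occasional
    # presynaptic spikes, and let STDP adjust a single synaptic weight.
    v, u = C, B * C
    w, last_pre, last_post = 0.5, None, None
    for t in range(200):
        pre_spike = (t % 25 == 0)   # illustrative presynaptic spike train
        v, u, post_spike = izhikevich_step(v, u, 10.0 + (5.0 if pre_spike else 0.0))
        if pre_spike:
            last_pre = t
            if last_post is not None:
                w = stdp_update(w, last_pre, last_post)
        if post_spike:
            last_post = t
            if last_pre is not None:
                w = stdp_update(w, last_pre, last_post)
    print(f"final synaptic weight: {w:.3f}")
```

In the architecture described in the abstract, these same ingredients operate over populations of input and output neurons connected by excitatory and inhibitory synapses with random initial weights, with STDP shaping the mapping from encoded stimuli to decoded motor commands during the action-perception cycles.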
