Abstract

Although they succeed in solving various learning tasks, most existing reinforcement learning (RL) models fail to account for the complexity of synaptic plasticity in the nervous system; models that implement reinforcement learning with spiking neurons typically involve only a single plasticity mechanism. Here, we propose a neurally realistic reinforcement learning model that coordinates the plasticities of two types of synapses: stochastic and deterministic. Plasticity at the stochastic synapse follows a hedonistic rule that modulates the neurotransmitter release probability, while plasticity at the deterministic synapse follows a variant of reward-modulated spike-timing-dependent plasticity (STDP) that modulates the synaptic strength. We evaluate the proposed model on two benchmark tasks: learning a logic gate function and the 19-state random walk problem. Experimental results show that coordinating diverse synaptic plasticities enables the RL model to learn rapidly and stably.
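
To make the two plasticity mechanisms concrete, the following is a minimal Python sketch of how a hedonistic stochastic synapse (learning via release probability) and a reward-modulated STDP synapse (learning via synaptic strength) might be updated in discrete time. The class names, constants, and exact update forms are illustrative assumptions, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)


class HedonisticSynapse:
    """Stochastic synapse: learns by modulating its release probability.

    Sketch of a hedonistic-style rule: on each presynaptic spike the synapse
    releases with probability p; the deviation of the actual outcome from p
    forms an eligibility trace that a later global reward signal converts
    into a change of p. All constants here are illustrative assumptions.
    """

    def __init__(self, p=0.5, lr=0.05, trace_decay=0.9):
        self.p = p                      # release probability
        self.lr = lr                    # learning rate
        self.trace_decay = trace_decay  # eligibility trace decay per step
        self.eligibility = 0.0

    def transmit(self, pre_spike):
        self.eligibility *= self.trace_decay
        if pre_spike:
            released = rng.random() < self.p
            # Release contributes (1 - p) to the trace, failure contributes (-p).
            self.eligibility += (1.0 if released else 0.0) - self.p
            return released
        return False

    def apply_reward(self, reward):
        # Reward pushes p up for synapses whose recent releases were "eligible".
        self.p = float(np.clip(self.p + self.lr * reward * self.eligibility,
                               0.05, 0.95))


class RStdpSynapse:
    """Deterministic synapse: learns by modulating its strength (weight).

    Sketch of reward-modulated STDP: pre/post spike pairings build an
    eligibility trace with the usual STDP sign and exponential time window,
    and a delayed reward converts that trace into a weight change.
    """

    def __init__(self, w=0.5, lr=0.01, a_plus=1.0, a_minus=1.0, tau=20.0):
        self.w = w
        self.lr = lr
        self.a_plus, self.a_minus, self.tau = a_plus, a_minus, tau
        self.eligibility = 0.0

    def on_spike_pair(self, t_pre, t_post):
        dt = t_post - t_pre
        if dt >= 0:   # pre before post: potentiation-flavoured eligibility
            self.eligibility += self.a_plus * np.exp(-dt / self.tau)
        else:         # post before pre: depression-flavoured eligibility
            self.eligibility -= self.a_minus * np.exp(dt / self.tau)

    def apply_reward(self, reward):
        self.w = float(np.clip(self.w + self.lr * reward * self.eligibility,
                               0.0, 1.0))
        self.eligibility = 0.0
```

In both sketches, learning is gated by the same global reward signal, which is the sense in which the two plasticities can be coordinated; only the plastic variable differs (release probability versus synaptic weight).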
