Abstract
The Feynman machine is a neural network model in which the spike-timing-dependent firing process is described through a path integral formulation. In addition, gradient descent on the free energy is proposed as an ideal learning rule for the model system. This unique formulation of the Feynman machine is useful for studying the essence of the firing and learning processes in a spiking neural network; however, implementing the Feynman machine is not a simple problem because of the difficulty of calculating the free energy. Here we introduce how to simulate both the firing and the learning processes of the Feynman machine through Monte Carlo or numerical integration methods. We demonstrate the adequacy of these methods by applying them to the firing and learning processes in several neural systems.
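To illustrate the kind of computation the abstract refers to, the following is a minimal sketch, not the paper's actual model: it assumes a small binary spiking network with a hypothetical quadratic energy function (the couplings `W`, biases `b`, and the energy itself are illustrative placeholders) and estimates the free energy F = -log Z both by exact enumeration over configurations, standing in for the numerical-integration route, and by naive Monte Carlo sampling of spike configurations.

```python
# Minimal sketch (assumed, not the paper's model): free energy F = -log Z of a
# small binary spiking network, estimated (i) exactly by enumeration and
# (ii) by naive Monte Carlo over spike configurations.
import itertools
import numpy as np

rng = np.random.default_rng(0)

N = 8                                   # number of neurons (small, so Z is tractable)
W = rng.normal(scale=0.5, size=(N, N))  # hypothetical symmetric coupling matrix
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.1, size=N)       # hypothetical bias (threshold) terms


def energy(S):
    """Quadratic energy of binary spike configurations, one per row of S."""
    return -0.5 * np.einsum("ij,jk,ik->i", S, W, S) - S @ b


# (i) Exact free energy via enumeration of all 2^N spike configurations.
configs = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
log_Z_exact = np.logaddexp.reduce(-energy(configs))
F_exact = -log_Z_exact

# (ii) Monte Carlo estimate: Z = 2^N * E_uniform[exp(-E(s))], with spike
# configurations s drawn uniformly at random.
M = 200_000
samples = rng.integers(0, 2, size=(M, N)).astype(float)
log_Z_mc = np.logaddexp.reduce(-energy(samples)) - np.log(M) + N * np.log(2)
F_mc = -log_Z_mc

print(f"exact F = {F_exact:.4f}")
print(f"MC    F = {F_mc:.4f}")
```

For a network this small the two estimates agree closely; the Monte Carlo route is what remains feasible once the number of neurons makes exact enumeration of configurations impossible.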