Abstract

In experimental and theoretical neuroscience, synaptic plasticity has long dominated research on neural plasticity. Recently, neuronal intrinsic plasticity (IP) has attracted increasing attention in this area. IP is sometimes thought to be an information-maximization mechanism. However, it is still unclear how IP affects the performance of artificial neural networks in supervised learning applications. From an information-theoretic perspective, the minimization of error entropy (MEE) algorithm has recently been proposed as an efficient training method. In this study, we propose a synergistic learning algorithm that combines the MEE algorithm as the synaptic plasticity rule with an information-maximization algorithm as the intrinsic plasticity rule. We consider both feedforward and recurrent neural networks and study the interactions between intrinsic and synaptic plasticity. Simulations indicate that the intrinsic plasticity rule can improve the performance of artificial neural networks trained by the MEE algorithm.
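To make the combination concrete, the following is a minimal sketch, not the paper's implementation: a single sigmoid neuron whose weights follow an MEE gradient computed from a Parzen-window estimate of the error's quadratic Renyi entropy, while its gain and bias follow a Triesch-style intrinsic plasticity rule that pushes the output distribution toward an exponential with mean mu. The data set, kernel width sigma, learning rates, and mu are all assumed illustrative values.

```python
import numpy as np

# Hedged sketch of the synergistic scheme for a single sigmoid neuron;
# all hyperparameters below are illustrative, not the paper's settings.
rng = np.random.default_rng(1)

X = rng.uniform(-1.0, 1.0, size=(200, 2))                 # inputs x
d = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))      # targets d

w = rng.normal(scale=0.1, size=2)   # synaptic weights (MEE rule)
a, b = 1.0, 0.0                     # intrinsic gain and bias (IP rule)
sigma, eta_w, eta_ip, mu = 0.3, 0.5, 0.005, 0.2

for _ in range(300):
    u = X @ w                                   # net input
    y = 1.0 / (1.0 + np.exp(-(a * u + b)))      # neuron output
    e = d - y                                   # error e = d - y

    # Synaptic (MEE) step: minimize Renyi's quadratic error entropy
    # H2(e) = -log V(e) by ascending the information potential
    # V(e) = (1/N^2) * sum_ij G_sigma(e_i - e_j).
    de = e[:, None] - e[None, :]                # pairwise error differences
    G = np.exp(-de**2 / (2.0 * sigma**2))       # Gaussian Parzen kernel
    dy_du = a * y * (1.0 - y)                   # sigmoid slope w.r.t. u
    s = (G * de).sum(axis=1)                    # sum_j (e_i - e_j) G_ij
    grad_V = 2.0 / (len(X)**2 * sigma**2) * (X.T @ (s * dy_du))
    w += eta_w * grad_V

    # Intrinsic (IP) step: Triesch-style gradient rule that drives the
    # output distribution toward an exponential with mean mu
    # (information maximization for a sigmoid nonlinearity).
    db = 1.0 - (2.0 + 1.0 / mu) * y + (y**2) / mu
    da = 1.0 / a + u * db
    b += eta_ip * db.mean()
    a += eta_ip * da.mean()

print("mean |error|:",
      np.abs(d - 1.0 / (1.0 + np.exp(-(a * (X @ w) + b)))).mean())
```

One point the sketch makes visible: the information potential V is invariant to a constant shift of the error, so entropy minimization alone cannot fix the output's operating point; here the IP bias update is what anchors it.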

Highlights

  • Artificial neural networks with nonlinear processing elements are designed to deal with the challenging problem of nonlinear and nonstationary signal processing

  • As in the case of the feedforward neural networks (FNN), we discuss how the recurrent neural networks (RNN) handle the problem of single-step prediction using the same data sets

  • Combining the minimization of error entropy (MEE) algorithm as the synaptic plasticity rule with an information-maximization algorithm as the intrinsic plasticity rule, we propose a synergistic information-theoretic learning algorithm for training artificial neural networks

Introduction

Artificial neural networks with nonlinear processing elements are designed to deal with the challenging problem of nonlinear and nonstationary signal processing. In a supervised learning problem, we are provided with a training data set containing the input, x, and the desired output (target), d, and we aim at finding the input-output mapping that models the complicated relationship between x and d. To solve such a problem, we can employ an artificial neural network trained by an appropriate learning algorithm to infer the mapping implied by the training data. Most current learning algorithms for artificial neural networks rely on updating the connection weights w among neurons. This is often done with the aim of minimizing the mean square error (MSE) between the network output y and the desired output d over all input-target pairs, where the error is defined as e = d − y. This is the form of synaptic plasticity we consider in this article.
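For concreteness, here is a minimal sketch of this MSE-based form of synaptic plasticity, assuming a single linear neuron and an invented toy data set: gradient descent on the squared error with e = d − y.

```python
import numpy as np

# Toy illustration of MSE-driven synaptic plasticity: gradient descent
# on the mean square error of a single linear neuron. The data set and
# learning rate are invented for illustration.
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 3))                 # inputs x
w_true = np.array([0.5, -1.2, 2.0])
d = X @ w_true + 0.1 * rng.normal(size=200)   # desired outputs d

w = np.zeros(3)                               # connection weights w
lr = 0.05                                     # assumed learning rate
for _ in range(200):
    y = X @ w                                 # network output y
    e = d - y                                 # error e = d - y
    w += lr * (X.T @ e) / len(X)              # descend the MSE gradient
print("learned weights:", w)                  # approaches w_true
```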
