Abstract

We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that assesses their computational power in terms of the significance of their attractor dynamics. This measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata into the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits.

Highlights

  • In neural computation, understanding the computational and dynamical properties of biological neural networks is an issue of central importance

  • Our notion of an attractor refers to a set of states such that the behaviour of the network could be forever confined to that set of states

  • We provide a generalisation to this precise infinite input stream context of the classical equivalence result between Boolean neural networks and finite state automata [1,2,3]


Introduction

In neural computation, understanding the computational and dynamical properties of biological neural networks is an issue of central importance. The first and seminal results in this direction were provided by McCulloch and Pitts, Kleene, and Minsky, who proved that first-order Boolean recurrent neural networks are computationally equivalent to classical finite state automata [1,2,3]. Kremer extended these results to the class of Elman-style recurrent neural nets [4], and Sperduti discussed the computational power of various other architecturally constrained classes of networks [5]. The computational equivalence between so-called ‘‘rational recurrent neural networks’’ and Turing machines has become a standard result in the field.
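To make the attractor notion concrete, the following is a minimal sketch (a hypothetical two-unit network, not taken from the paper) of a first-order Boolean recurrent network in the McCulloch-Pitts style: each unit applies a hard threshold to a weighted sum of the previous state and the current input bit, and iterating the update under a constant input must eventually revisit a state, yielding a cycle of states into which the dynamics are forever confined.

```python
def step(state, inp, weights, in_weights, thresholds):
    """One synchronous update: unit i fires iff its weighted input meets its threshold."""
    return tuple(
        1 if sum(w * s for w, s in zip(weights[i], state)) + in_weights[i] * inp >= thresholds[i]
        else 0
        for i in range(len(state))
    )

def attractor(state, inp, weights, in_weights, thresholds):
    """Iterate under a constant input until a state repeats; return the periodic part."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state, inp, weights, in_weights, thresholds)
    return trajectory[seen[state]:]  # the cycle the dynamics are confined to

# A toy "toggle" network: each unit copies the other, so (0,1) and (1,0)
# form a period-2 attractor, while (0,0) and (1,1) are fixed-point attractors.
W = [[0, 1], [1, 0]]  # recurrent weights (hypothetical)
U = [0, 0]            # input weights (input is ignored in this toy example)
T = [1, 1]            # thresholds
print(attractor((0, 1), 0, W, U, T))  # [(0, 1), (1, 0)]
print(attractor((1, 1), 0, W, U, T))  # [(1, 1)]
```

Since the state space is finite, every trajectory under a fixed input enters such a cycle; the paper's classification ranks networks by the significance of these attractors over infinite input streams, which this finite-input sketch does not capture.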

