Abstract

In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of memories for any n. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
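
The fixed-point attractor dynamics described above can be illustrated with a minimal sketch (the network size, stored pattern, and Hebbian outer-product weights below are illustrative assumptions, not the paper's construction):

```python
import numpy as np

# Minimal Hopfield network sketch. Because the weights are symmetric,
# asynchronous threshold updates never increase the energy
# E(x) = -1/2 x^T W x + theta^T x, so the dynamics settle into a fixed point.

rng = np.random.default_rng(0)
n = 16
pattern = rng.choice([-1, 1], size=n)       # one stored memory (+/-1 states)
W = np.outer(pattern, pattern) - np.eye(n)  # symmetric Hebbian weights, zero diagonal
theta = np.zeros(n)                          # thresholds

def update_to_fixed_point(x, W, theta, max_sweeps=100):
    """Asynchronously update units until no unit changes (a fixed point)."""
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x)):
            new = 1 if W[i] @ x - theta[i] >= 0 else -1
            if new != x[i]:
                x[i] = new
                changed = True
        if not changed:
            break
    return x

# Corrupt a few bits of the stored memory; the dynamics recover it.
noisy = pattern.copy()
noisy[:3] *= -1
recovered = update_to_fixed_point(noisy, W, theta)
```

Running the sketch, the corrupted state flows back to the stored pattern, which is its own fixed point under the update rule.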

Highlights

  • In their seminal work, McCulloch and Pitts [1] developed a theory of discrete recurrent neural networks (DRNNs) that simultaneously contained a model for spike trains, a computational theory of mind [2], and the start of circuit design for programmable electronic computers [3]

  • We focus here on the problem of learning in the special case of Hopfield networks [4], which are McCulloch–Pitts networks with symmetric weights whose state dynamics always converge to fixed-point attractors

  • We construct Hopfield networks with robust exponential memory in hypergraphs, and we show that minimum energy flow (MEF) can be used to efficiently learn them (Theorem 2)
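
The MEF objective's convexity can be sketched as follows. The paper's precise formulation appears in its Results; as a labeled assumption, the sketch below uses a minimum-probability-flow-style objective, summing exponentiated energy differences between each training pattern and its single-bit-flip neighbors:

```python
import numpy as np

def energy(x, W, theta):
    """Hopfield energy E(x) = -1/2 x^T W x + theta^T x."""
    return -0.5 * x @ W @ x + theta @ x

def mef_objective(data, W, theta):
    # Assumed illustrative form: sum over each training pattern x and its
    # single-bit-flip neighbors x' of exp((E(x) - E(x')) / 2). The sum is
    # small when every datum sits in a deep energy well relative to its
    # neighbors; since E is linear in (W, theta), each term is convex in
    # the parameters, and so is the objective.
    total = 0.0
    for x in data:
        for i in range(len(x)):
            xp = x.copy()
            xp[i] *= -1
            total += np.exp((energy(x, W, theta) - energy(xp, W, theta)) / 2)
    return total

# Demo: Hebbian weights that store the pattern yield a lower objective
# than flat (all-zero) weights, which leave the energy landscape flat.
rng = np.random.default_rng(1)
p = rng.choice([-1, 1], size=8)
data = [p]
W_hebb = np.outer(p, p) - np.eye(8)
obj_hebb = mef_objective(data, W_hebb, np.zeros(8))
obj_flat = mef_objective(data, np.zeros((8, 8)), np.zeros(8))
```

Minimizing such a convex surrogate with gradient methods scales well, which is the sense in which MEF offers efficient learning.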


Introduction

McCulloch and Pitts [1] developed a theory of discrete recurrent neural networks (DRNNs) that simultaneously contained a model for spike trains (sequences of action potentials in neural activity), a computational theory of mind [2], and the start of circuit design for programmable electronic computers [3]. Many variations of these concepts have since guided research in artificial intelligence and neuroscience.
