Abstract

The nature of the distributed memory of neural networks is considered. Since the memory capacity of a neural network depends on the presence of feedback in its structure, this question requires further study. It is shown that neural networks without feedback can be exhaustively described by analogy with error-correcting (noise-resistant) coding algorithms; for such networks, the use of the term "memory" is not justified at all. Moreover, the functioning of such networks obeys an analog of Shannon's formula, first obtained in this paper. This formula makes it possible to specify in advance the number of images that a neural network can recognize for a given code distance between them. It is further shown that for artificial neural networks with negative feedback it is indeed justified to speak of the distributed memory of a network, and that in this case the boundary between the distributed memory of a neural network and the information-storage mechanisms of elements such as RS flip-flops becomes diffuse. For the example considered, a specific formula is obtained that relates the number of possible output states of the network (and hence its memory capacity) to the number of its elements.
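The Shannon-type formula itself appears only in the full text. As an illustration of the kind of relation claimed, the classical sphere-packing (Hamming) bound from coding theory connects the same quantities: for binary output vectors of length $n$ whose pairwise code distance is at least $d = 2t + 1$, the number of distinguishable images $M$ satisfies

$$M \le \frac{2^{n}}{\sum_{k=0}^{t} \binom{n}{k}}.$$

This standard bound is given here only to indicate the form such a capacity formula takes; the paper's own expression may differ.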
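Likewise, the formula for networks with feedback is not reproduced in the abstract. A minimal sketch, assuming each bistable element (such as an RS flip-flop) contributes one independent binary degree of freedom, bounds the number $S$ of stable output configurations of a network of $N$ elements:

$$S \le 2^{N},$$

so the memory capacity is at most $N$ bits. The specific formula obtained in the paper may be tighter, since feedback couplings generally exclude some configurations from being stable.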
