Abstract

Artificial neural networks are often understood as a good way to imitate the mind through the web structure of the neurons in the brain, but the very high complexity of the human brain prevents us from considering neural networks as good models of the human mind; nevertheless, neural networks are good devices for parallel computation. The difference between feed-forward and feedback neural networks is introduced; the Hopfield network and the multi-layer perceptron are discussed. Within a very weak isomorphism (not a similitude) between the brain and neural networks, an artificial form of short-term memory and of recognition, in Elman neural networks, is proposed.

Highlights

  • Nowadays we are out of the illusion that computers can be good models for the human mind

  • The human mind is the result of the biophysical structure of a nervous system in a body that evolved to survive in the environment, in communication with other individuals of the same species and in relationship with other species of the ecosystem: its power is due to a very long and hard evolution, and we are not able to understand its complexity [1]

  • In this composition the difference between connections is given by the context of the functions: e.g. the connection I1, in the context of ΦH1(Σw(I1+I2+I3)), has a different weight from the I1 contained in ΦH2(Σw(I1+I2+I3)); likewise, the whole function ΦH1(Σw(I1+I2+I3)), in the context of ΦO1{...}, has a different value from the ΦH1(Σw(I1+I2+I3)) contained in ΦO2{...}. This linear explication of the state of the TLP shows its computational order and the internal relationships between the connections and their values, and contains the idea that the activation function (Φ) of a neuron is like a point of view on the weighted sum (Σw) of its incoming connections. To conclude this discussion of feedback and feed-forward networks, I want to introduce a hybrid class of neural networks with interesting properties: the Elman networks (Figure 3)
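The Elman loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the article's implementation: the single input and hidden unit, the logistic activation, and all weight values are hypothetical. A copy of the hidden state is kept in a context layer and fed back as extra input at the next step, which gives the network a rudimentary short-term memory: the same input produces different outputs depending on what came before.

```python
import math

def phi(x):
    # logistic activation: the neuron's "point of view" on its weighted input sum
    return 1.0 / (1.0 + math.exp(-x))

class Elman:
    """Minimal Elman-style loop with one input unit and one hidden unit.
    All weights are hypothetical illustrative values."""
    def __init__(self):
        self.w_in = 0.7     # input -> hidden weight
        self.w_ctx = 0.5    # context (previous hidden state) -> hidden weight
        self.context = 0.0  # context layer, initially silent

    def step(self, x):
        # hidden state depends on the current input AND the stored context
        h = phi(self.w_in * x + self.w_ctx * self.context)
        self.context = h    # copy the hidden state into the context layer
        return h
```

Feeding the identical input twice yields two different hidden states, because the second step also "sees" the first step through the context layer: this feedback copy is the artificial short-term memory the abstract refers to.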

Introduction

Nowadays we are out of the illusion that computers can be good models for the human mind. The goal that A.I. should attain is the emulation, through a computer, of some processes of the mind in relationship with the environment (the world and other individuals). With respect to this objective I want to underline two obstacles in the neural network strategy: 1) and 2). The mind/brain translation problem will not be overcome until we have a clear theory of thought, consciousness, perception and action as cerebral phenomena. If this theory is to be useful to the neural network strategy, it must be conceived following the philosophy and language of neural networks. A theory that speaks the language of neural networks should consider thought (i.e. mental representations, planning, consciousness, memory and so on), perception and action not as “states” but as fluxes of states which go through the network (ordered and structured sets of states which go through the network). There are many kinds of neural network structure, but the architecture of the most common neural networks consists of a simple three-layer structure of artificial neurons, like the three-layer “perceptron” of Figure 1, which I will call the TLP
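A three-layer perceptron of this kind can be sketched directly from the Φ(Σw·I) notation. This is an illustrative sketch only: the layer sizes and all weight values are hypothetical, not taken from Figure 1. It also shows the contextual point made in the highlights: the same input I1 carries one weight inside ΦH1 and a different weight inside ΦH2.

```python
import math

def phi(x):
    # logistic activation applied to the weighted sum of incoming connections
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights: the same inputs I1, I2, I3 are weighted differently
# in the context of hidden unit H1 than in the context of hidden unit H2.
w_h1 = [0.5, -0.2, 0.1]   # weights of I1, I2, I3 into H1
w_h2 = [-0.3, 0.4, 0.8]   # weights of I1, I2, I3 into H2
w_o1 = [1.0, -1.0]        # weights of H1, H2 into output unit O1

def hidden(weights, inputs):
    # Phi_H(Sum of w*I): the weighted input sum seen through the activation
    return phi(sum(w * i for w, i in zip(weights, inputs)))

def output(inputs):
    h1 = hidden(w_h1, inputs)
    h2 = hidden(w_h2, inputs)
    # Phi_O1{...}: the hidden values are in turn re-weighted in O1's context
    return phi(sum(w * h for w, h in zip(w_o1, [h1, h2])))
```

Evaluating `output([1.0, 0.0, 1.0])` performs one feed-forward pass: inputs to hidden layer, hidden layer to output, with no feedback connections, which is what distinguishes the TLP from the Hopfield and Elman networks discussed elsewhere in the article.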

