Abstract

In this paper, starting from a general discussion of neural network dynamics from the standpoint of statistical mechanics, we present three different strategies for dealing with the problem of pattern recognition in neural nets. In particular, we emphasize the role of matching the intrinsic correlations within the input patterns in achieving optimal pattern recognition. In this context, the first two strategies, which we have applied to different problems and discuss in this paper, consist essentially in adding either white noise or colored noise (deterministic chaos) during input-pattern pre-processing, to make class separation easier for a classical backpropagation algorithm when the input patterns are, respectively, too correlated among themselves or, on the contrary, too noisy. The third, more radical strategy, which we have applied to very hard pattern recognition problems in HEP experiments, consists in an automatic (dynamic) redefinition of the net topology itself based on the inner correlations of the inputs.

© (1996) COPYRIGHT SPIE--The International Society for Optical Engineering.
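The two noise-injection strategies can be sketched as simple pre-processing steps. The following is a minimal illustration, not the paper's actual procedure: Gaussian white noise decorrelates overly similar patterns, while a logistic-map trajectory stands in for the colored (deterministic-chaos) noise; the specific map, noise amplitudes, and function names here are assumptions for illustration.

```python
import numpy as np

def add_white_noise(patterns, sigma=0.1, seed=0):
    # White-noise strategy: perturb patterns that are too correlated
    # among themselves with independent Gaussian noise.
    rng = np.random.default_rng(seed)
    return patterns + sigma * rng.standard_normal(patterns.shape)

def add_chaotic_noise(patterns, r=4.0, x0=0.3, scale=0.1):
    # Colored-noise strategy: deterministic, correlated perturbations
    # generated by iterating the logistic map x -> r*x*(1-x).
    # (The exact chaotic source used in the paper is not specified here.)
    x = x0
    noise = np.empty(patterns.size)
    for i in range(patterns.size):
        x = r * x * (1.0 - x)
        noise[i] = x - 0.5  # centre the trajectory around zero
    return patterns + scale * noise.reshape(patterns.shape)

# Four identical (maximally correlated) toy patterns of dimension 3.
patterns = np.ones((4, 3))
noisy = add_white_noise(patterns)
chaotic = add_chaotic_noise(patterns)
```

The perturbed patterns would then be fed to a standard backpropagation classifier in place of the raw inputs.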
