Abstract

Feedforward neural network (NN) models approximate nonlinear functions that connect inputs to outputs by repeated application of simple nonlinear transformations. By combining this feature of NN models with traditional multivariate analysis (MVA) techniques, nonlinear versions of the latter can readily be constructed. In this paper, we examine various properties of nonlinear MVA by NN models in two specific contexts: Cascade Correlation (CC) networks for nonlinear discriminant analysis simulating the learning of personal pronouns, and a five-layer auto-associative network for nonlinear principal component analysis (PCA) finding two defining features of cylinders. We analyze the mechanism of function approximation, focusing in particular on how interaction effects among input variables are captured by superpositions of sigmoidal transformations.
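The five-layer auto-associative architecture mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation: the layer widths, the cylinder features, and all weight values below are illustrative assumptions. The essential structure is an input layer, a sigmoidal mapping layer, a two-unit bottleneck (the nonlinear principal components), a sigmoidal demapping layer, and a linear reconstruction layer; training such a network to reproduce its input forces the bottleneck to encode the data's two underlying degrees of freedom (here, a cylinder's radius and height).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical cylinder data: each sample is generated from two latent
# factors (radius r, height h) and embedded in five derived features.
n = 200
r = rng.uniform(0.5, 2.0, n)
h = rng.uniform(0.5, 2.0, n)
X = np.column_stack([r, h, 2 * np.pi * r * h,        # lateral surface area
                     np.pi * r**2 * h,               # volume
                     r / h])                         # aspect ratio
X = (X - X.mean(axis=0)) / X.std(axis=0)             # standardize inputs

# Five-layer auto-associative network:
# input(5) -> mapping(8, sigmoid) -> bottleneck(2, linear)
#          -> demapping(8, sigmoid) -> output(5, linear)
sizes = [5, 8, 2, 8, 5]
W = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(s) for s in sizes[1:]]

def forward(X):
    """Return the 2-D bottleneck codes and the reconstruction."""
    h1 = sigmoid(X @ W[0] + b[0])   # mapping layer (sigmoidal)
    z = h1 @ W[1] + b[1]            # bottleneck: the two nonlinear PCs
    h2 = sigmoid(z @ W[2] + b[2])   # demapping layer (sigmoidal)
    out = h2 @ W[3] + b[3]          # linear reconstruction of the input
    return z, out

z, out = forward(X)
print(z.shape, out.shape)
```

Training the weights to minimize the reconstruction error (e.g. by gradient descent, omitted here for brevity) is what turns the bottleneck activations `z` into a nonlinear analogue of the first two principal components.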
