Abstract
Independent feature extraction, i.e., independent component analysis, was formulated in Chapter 4 as a search for an invertible, volume-preserving map which statistically decorrelates the output components in the case of an arbitrary, possibly non-Gaussian input distribution. The same chapter discussed in detail the case where the input-output maps are linear, i.e., matrices. The first extension to nonlinear transformations was carried out in Chapter 5, where stochastic neural networks were used to perform statistical decorrelation of Boolean outputs. This chapter further extends nonlinear independent feature extraction by introducing a very general class of deterministic nonlinear input-output maps whose architecture guarantees bijectivity and volume preservation. The criteria for evaluating statistical dependence are those defined in Chapter 4: the cumulant expansion method and the minimization of the mutual information among the output components. Atick and Redlich [6.1], and especially the two papers by Redlich [6.2], [6.3], use similar information-theoretic concepts and reversible cellular automata architectures to define how nonlinear decorrelation can be performed. Taylor and Coombes [6.4] presented an extension of Oja's learning rule for polynomial, i.e., higher-order, neural networks.
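The abstract does not reproduce the chapter's specific architecture, so the sketch below is only a minimal illustration of the underlying idea: one well-known way to obtain a nonlinear map that is bijective and volume preserving by construction is an additive triangular (coupling) transformation, whose Jacobian is triangular with unit diagonal, hence det J = 1. All function and variable names here are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def coupling_forward(x, f):
    """Additive triangular map: y1 = x1, y2 = x2 + f(x1).

    The Jacobian is lower triangular with unit diagonal, so det J = 1:
    the map is volume preserving and trivially invertible, no matter
    how nonlinear f is.
    """
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 + f(x1)], axis=-1)

def coupling_inverse(y, f):
    """Exact inverse: x1 = y1, x2 = y2 - f(y1)."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    return np.concatenate([y1, y2 - f(y1)], axis=-1)

# Example: an arbitrary smooth nonlinearity still yields a bijective,
# volume-preserving map (hypothetical weights, for illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
W = rng.normal(size=(2, 2))
f = lambda u: np.tanh(u @ W)

y = coupling_forward(x, f)
assert np.allclose(coupling_inverse(y, f), x)  # bijectivity holds exactly
```

In a sketch of this kind, such maps could be composed (alternating which half of the components is transformed) to obtain richer volume-preserving bijections, with the statistical-dependence criteria named above serving as the training objective.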