Abstract

It is shown that k hidden units with an asymptotic activation function can map any given k+1 different inputs to linearly independent GHUVs (generated hidden unit vectors) by properly setting the weights and thresholds. For a polynomial activation function, the number of hidden units with the LIT (linearly independent transformation) capability is limited by the order of the polynomial. For analytic asymptotic activation functions and given different inputs, LIT is a generic capability and holds with probability 1 when weights and thresholds are set randomly. Conversely, if a weight and threshold setting has the LIT capability for some k+1 inputs, LIT is a generic and probability-1 property for any random inputs. Three-layer nets with k hidden units, in which the hidden activation function is asymptotic and the output layer has no activation function, are sufficient to record k+1 arbitrary real samples. If the activation is a unit step function, recording k+2 random real samples has probability 0; the same holds for the sigmoid function in the case of associative memory. These conclusions lead to a scheme for understanding associative memory in three-layer networks.
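The probability-1 claims are easy to check numerically. Below is a minimal sketch, assuming the GHUV of an input x is the bias-augmented vector (1, σ(w₁ᵀx + b₁), ..., σ(w_kᵀx + b_k)) with a sigmoid (an analytic asymptotic) activation; the dimensions, variable names, and random draws are illustrative, not the paper's construction. With randomly set weights and thresholds, the k+1 GHUVs of k+1 different inputs should be linearly independent with probability 1, so a linear output layer can record k+1 arbitrary real samples exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

k = 5                             # number of hidden units
d = 3                             # input dimension (illustrative)
X = rng.normal(size=(k + 1, d))   # k+1 different inputs (distinct with prob. 1)

# Randomly set weights and thresholds of the k hidden units.
W = rng.normal(size=(d, k))
b = rng.normal(size=k)

def sigmoid(z):
    # Analytic asymptotic activation function.
    return 1.0 / (1.0 + np.exp(-z))

# GHUV matrix: each row is the bias-augmented hidden-unit vector of one input.
H = np.hstack([np.ones((k + 1, 1)), sigmoid(X @ W + b)])   # shape (k+1, k+1)

# LIT: with probability 1 the k+1 GHUVs are linearly independent.
print("rank of GHUV matrix:", np.linalg.matrix_rank(H))    # expected: k+1

# Hence a linear output layer records k+1 arbitrary real samples exactly.
y = rng.normal(size=k + 1)        # arbitrary real targets
v = np.linalg.solve(H, y)         # output weights (incl. output bias)
print("max recording error:", np.max(np.abs(H @ v - y)))   # ~ machine epsilon
```

Rerunning with different seeds always yields full rank, in line with the abstract's claim that LIT under random weight and threshold settings fails only on a probability-0 set.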
