Abstract

A neural network model is introduced that can learn higher order correlations within the input data without suffering from the combinatorial explosion problem. The number of parameters scales as M̃ × N, where N is the dimensionality of the input vectors and M̃ is the minimum number of higher order terms required, i.e., no higher order network with fewer than M̃ higher order terms can implement the same input data set. To obtain better generalization, the model was designed to realize supervised learning such that, after learning, the output for any input vector is the same as the output of a higher order network that implements the same input data set using M̃ higher order terms. Unlike the case with product units, the local minima problem is not severe in this model. Simulation results for several problems are presented and compared with the results of a multilayer feedforward network; the model is observed to generalize better than the multilayer feedforward network.
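To make the M̃ × N parameter count concrete, the following minimal sketch (not the paper's actual architecture, which is not given in the abstract) shows a generic higher-order layer in which each of M̃ terms carries one weight per input dimension, so the weight matrix has M̃ × N entries rather than one weight per input combination. All names and the product-term form below are illustrative assumptions.

```python
import numpy as np

def higher_order_output(x, W, v):
    """Illustrative higher-order (sigma-pi style) unit.

    x: input vector of length N.
    W: (M_tilde, N) weight matrix -- one weight per input per higher-order term,
       i.e. M_tilde * N parameters in total.
    v: (M_tilde,) weights combining the higher-order terms at the output.
    """
    terms = np.prod(W * x, axis=1)            # each row yields one higher-order term
    return 1.0 / (1.0 + np.exp(-(terms @ v))) # sigmoid of the weighted sum of terms

# Usage example with assumed sizes: N = 4 inputs, M_tilde = 3 higher-order terms.
N, M_tilde = 4, 3
rng = np.random.default_rng(0)
x = rng.normal(size=N)
W = rng.normal(size=(M_tilde, N))  # M_tilde * N parameters, no combinatorial blow-up
v = rng.normal(size=M_tilde)
print(higher_order_output(x, W, v))
```

By contrast, an explicit higher-order expansion that enumerates all input products up to order k would need on the order of N choose k weights per order, which is the combinatorial explosion the abstract refers to.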
