Abstract
One of the important theoretical questions studied by neural network researchers is how large a network must be to realize an arbitrary set of training patterns. Baum (1988) considered two-class classification problems in which the input vectors are in general position, meaning that no D+1 vectors lie on a (D-1)-dimensional hyperplane. He proved that ⌈M/D⌉ hidden nodes are both necessary and sufficient for implementing any dichotomy, where M denotes the number of examples, D denotes the dimension of the pattern vectors, and ⌈x⌉ denotes the smallest integer greater than or equal to x. Huang and Huang (1991) and Sartori and Antsaklis (1991) proved that when the general position condition does not hold, M-1 hidden nodes are sufficient for implementing analog mappings. In this paper the author considers analog mappings (real-valued input vectors and real-valued scalar outputs) under the general position condition, and proves that 2⌈M/D⌉ hidden nodes are sufficient for implementing arbitrary mappings.
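To make the three bounds concrete, here is a minimal sketch in Python that evaluates each of them for a given training-set size M and input dimension D. The function names are ours, chosen for illustration; only the formulas themselves come from the abstract.

```python
import math

def baum_dichotomy_bound(m: int, d: int) -> int:
    """Baum (1988): ceil(M/D) hidden nodes are necessary and sufficient
    to implement any dichotomy of M examples in general position in R^D."""
    return math.ceil(m / d)

def general_case_bound(m: int) -> int:
    """Huang and Huang (1991); Sartori and Antsaklis (1991): M-1 hidden
    nodes suffice for analog mappings when general position may not hold."""
    return m - 1

def analog_general_position_bound(m: int, d: int) -> int:
    """This paper's result: 2*ceil(M/D) hidden nodes suffice for arbitrary
    analog mappings of M examples in general position."""
    return 2 * math.ceil(m / d)

# Example: M = 100 training examples with D = 10 input dimensions.
print(baum_dichotomy_bound(100, 10))           # 10
print(general_case_bound(100))                 # 99
print(analog_general_position_bound(100, 10))  # 20
```

As the example suggests, when the inputs are in general position the paper's bound grows with M/D rather than M, which is a substantially smaller network than the M-1 nodes needed in the general case.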