Abstract

Artificial neural networks are relatively new methods for classification. We investigate two important issues in building neural network models: network architecture and training-sample size. Experiments were designed and carried out on two-group classification problems to answer these model-building questions. The first experiment deals with the selection of architecture and sample size for different classification problems. Results show that the choices of architecture and sample size depend on the objective: to maximize the classification rate on the training samples, or to maximize the generalizability of the neural network. The second experiment compares neural network models with classical models such as linear discriminant analysis and quadratic discriminant analysis, and with nonparametric methods such as k-nearest-neighbor and linear programming. Results show that neural networks are comparable to, if not better than, these other methods in terms of classification rates on the training samples but not on the test samples.


