Abstract

When a set of patterns is not linearly separable, the problem of designing and training a neural network for classification using discrete activation functions is NP-complete. For this reason, research in this area has focused on designing efficient algorithms that produce good heuristic solutions. Most reported results propose variations and modifications of the classical perceptron training algorithm in order to determine the number of neurons in the hidden layer and the corresponding weight matrix. The algorithm presented here instead transforms the original set of training patterns into a linearly separable one. Applying the procedure for verifying linear separability then yields the weights of the output layer. The end result of the proposed algorithm is a trained neural network that correctly classifies the set of patterns into K classes.
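The abstract does not spell out the verification procedure; as a minimal illustration of how linear separability can be checked in practice, the sketch below uses the classical perceptron rule, which is guaranteed to converge exactly when a separating hyperplane exists. The function name, the epoch bound, and the toy AND/XOR data are assumptions for this example, not the paper's method.

```python
import numpy as np

def perceptron_separability(X, y, max_epochs=1000):
    """Try to find a separating hyperplane with the perceptron rule.

    Returns a weight vector (bias as last component) once every pattern
    is correctly classified, or None if no separator is found within
    max_epochs -- convergence is guaranteed only for separable sets.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0      # discrete (threshold) activation
            if pred != target:
                w += (target - pred) * xi       # perceptron weight update
                errors += 1
        if errors == 0:
            return w  # all patterns correctly classified
    return None

# AND is linearly separable, so a weight vector is found;
# XOR is not, so the check gives up after max_epochs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w_and = perceptron_separability(X, np.array([0, 0, 0, 1]))
w_xor = perceptron_separability(X, np.array([0, 1, 1, 0]), max_epochs=100)
```

A transformation of the kind the abstract describes would map patterns such as XOR's into a space where this check succeeds, at which point the recovered weights serve as the output layer.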
