Abstract

A new learning scheme, called projection learning (PL), for self-organizing neural networks is presented. By iteratively subtracting out the projection of the winning neuron's weight vector onto the null space of the input vector, the neuron is made more similar to the input. By subtracting the projection onto the null space rather than aligning the weight vector directly with the input, we attempt to reduce the bias of the weight vectors. This reduced bias improves the generalizing ability of the network, a feature that is important in problems where the within-class variance is very high, such as traffic sign recognition. Comparisons of PL with standard Kohonen learning indicate that projection learning is faster. Projection learning is implemented on a new self-organizing neural network model called the reconfigurable neural network (RNN). The RNN is designed to incorporate new patterns online without retraining the network, and it is used to recognize traffic signs for a mobile robot navigation system.
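The abstract does not give the exact update rule, but the idea it describes can be sketched as follows. Assuming the simplest reading, the component of the winning neuron's weight vector lying in the null space of the input (i.e., orthogonal to the input direction) is partially subtracted each step, so the weights drift toward the input's span without being snapped onto the input itself. The function name, the learning-rate parameter `alpha`, and the specific form of the rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def projection_learning_step(w, x, alpha=0.5):
    """One hypothetical projection-learning (PL) update for the winning neuron.

    Subtracts a fraction alpha of w's component in the null space of the
    input x, pulling w toward span(x) rather than setting w equal to x.
    """
    x_unit = x / np.linalg.norm(x)                 # unit input direction
    null_component = w - np.dot(w, x_unit) * x_unit  # part of w orthogonal to x
    return w - alpha * null_component               # shrink the off-input part

# Repeated updates shrink the angle between w and x while preserving
# the component of w that already agrees with the input.
w = np.array([1.0, 0.0])
x = np.array([1.0, 1.0])
for _ in range(5):
    w = projection_learning_step(w, x)
```

After a few iterations the cosine similarity between `w` and `x` approaches 1, while a direct-alignment rule would have discarded the original weight vector in a single step; this gradual convergence is one plausible reading of the claimed bias reduction.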
