Abstract

The author identifies several neural network models related to nearest-neighbor learning, including radial basis functions, sparse distributed memory, and localized receptive fields. One way to improve the performance of neural networks is through the cooperation of different learning algorithms. The prediction of chaotic time series is used as an example to show how nearest-neighbor learning can improve Sanger's tree-structured algorithm, which predicts future values of the Mackey-Glass differential delay equation.
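As a toy illustration of the nearest-neighbor approach to chaotic time-series prediction mentioned above, the sketch below generates a Mackey-Glass series by Euler integration, builds delay-coordinate vectors, and predicts each next value by averaging the successors of the k nearest stored vectors. All parameters (step size, delay, embedding dimension, k) are illustrative assumptions, not values from the paper:

```python
def mackey_glass(n, a=0.2, b=0.1, tau=17, dt=1.0, x0=1.2):
    """Generate n samples of the Mackey-Glass delay equation
    dx/dt = a*x(t-tau)/(1 + x(t-tau)**10) - b*x(t) via crude Euler steps.
    (Parameters are conventional choices, assumed here for illustration.)"""
    hist = [x0] * (tau + 1)  # constant initial history
    for _ in range(n):
        x_tau = hist[-(tau + 1)]
        x = hist[-1]
        hist.append(x + dt * (a * x_tau / (1 + x_tau ** 10) - b * x))
    return hist[tau + 1:]

def embed(series, dim=4, lag=6):
    """Pair each delay-coordinate vector with the next series value."""
    pts = []
    for t in range((dim - 1) * lag, len(series) - 1):
        v = tuple(series[t - i * lag] for i in range(dim))
        pts.append((v, series[t + 1]))
    return pts

def knn_predict(train, query, k=3):
    """Average the successors of the k nearest stored delay vectors."""
    ranked = sorted(train,
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    return sum(y for _, y in ranked[:k]) / k

series = mackey_glass(600)
data = embed(series)
train, test = data[:500], data[500:]
err = sum(abs(knn_predict(train, q) - y) for q, y in test) / len(test)
print(f"mean abs error: {err:.4f}")
```

In a cooperative scheme of the kind the abstract describes, such nearest-neighbor predictions could be combined with a trained network's outputs rather than used alone.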
