Abstract

The optimal algorithm for on-line learning in the tree K-parity machine is studied. We introduce a set of recursion relations for the relevant probability distributions, which permit study of the general K case. The generalization error curve is determined and shown to decay to zero for large α, even in the presence of noise. There is no critical noise level. The dynamics of on-line learning is studied analytically near the origin. In the absence of previous knowledge, the learning dynamics has a fixed point at the origin. Previous knowledge is needed in at least K - 1 branches for learning to take place.
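For orientation, the sketch below illustrates the standard tree K-parity machine output rule (the product of the signs of K hidden perceptrons, each acting on a disjoint input branch) together with a Monte Carlo estimate of the generalization error between a teacher and a student. It is not taken from the paper; the Gaussian weight initialisation, branch sizes, and sample counts are illustrative assumptions only.

```python
import numpy as np

def parity_output(J, x):
    """Tree parity machine output: product of the signs of the K hidden
    perceptrons. J and x have shape (K, N): K branches, N inputs each."""
    local_fields = np.einsum("kn,kn->k", J, x)  # one field per branch
    return np.prod(np.sign(local_fields))

# Illustrative teacher/student pair with random (untrained) weights.
rng = np.random.default_rng(0)
K, N = 3, 100
teacher = rng.standard_normal((K, N))
student = rng.standard_normal((K, N))

# Estimate the generalization error as the disagreement probability
# over randomly drawn inputs.
samples = 10_000
errors = 0
for _ in range(samples):
    x = rng.standard_normal((K, N))
    errors += parity_output(teacher, x) != parity_output(student, x)

print("estimated generalization error:", errors / samples)
```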
