Abstract

It has been proposed that invariant pattern recognition might be implemented using a learning rule that utilizes a trace of previous neural activity which, given the spatio-temporal continuity of the statistics of sensory input, is likely to be about the same object, though with differing transforms, over short time scales. Recently, it has been demonstrated that a modified Hebbian rule which incorporates a trace of previous activity but no contribution from the current activity can offer substantially improved performance. In this paper we show how this rule can be related to error correction rules, and explore a number of error correction rules that can be applied to this problem and can produce good invariant pattern recognition. An explicit relationship to temporal difference learning is then demonstrated, and from this, further learning rules related to temporal difference learning are developed. This relationship to temporal difference learning allows us to begin to exploit established analyses of temporal difference learning to provide a theoretical framework for better understanding the operation and convergence properties of these learning rules and, more generally, of rules useful for learning invariant representations. The efficacy of these different rules for invariant object recognition is compared using VisNet, a hierarchical competitive network model of the operation of the visual system.
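As a rough illustration of the kind of rule the abstract describes, the sketch below implements a modified Hebbian trace rule in which the weight update depends only on the trace of previous post-synaptic activity, with no contribution from the current firing. This is a minimal sketch under assumed conventions (the function names, learning rate `alpha`, and trace parameter `eta` are illustrative, not taken from the paper):

```python
import numpy as np

def update_trace(y_trace, y, eta=0.8):
    """Exponential trace of post-synaptic activity:
    y_trace(t) = (1 - eta) * y(t) + eta * y_trace(t - 1)."""
    return (1.0 - eta) * y + eta * y_trace

def trace_rule_update(w, x, y_trace_prev, alpha=0.1):
    """One weight update of the modified trace rule: the change is
    proportional to the trace of *previous* activity times the current
    pre-synaptic input; the current post-synaptic firing does not
    appear in the update."""
    return w + alpha * y_trace_prev * x

# Example: successive transforms of one object drive similar inputs,
# so the trace links them onto the same output neuron's weights.
w = np.zeros(3)
frames = [np.array([1.0, 0.0, 1.0]),   # one transform of the object
          np.array([0.9, 0.1, 1.0])]   # a nearby transform
y_trace = 0.0
for x in frames:
    w = trace_rule_update(w, x, y_trace)
    y = float(w @ x)                   # current post-synaptic firing
    y_trace = update_trace(y_trace, y)
```

Because the first frame arrives with a zero trace, it produces no weight change; learning only begins once the trace carries activity forward from earlier frames, which is what ties temporally adjacent transforms of an object together.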
