Abstract

We argue that atomistic learning—learning that requires training only on a novel item to be learned—is problematic for networks in which every weight is available for change in every learning situation. This is potentially significant because atomistic learning appears to be commonplace in humans and most non-human animals. We briefly review various proposed fixes, concluding that the most promising strategy to date involves training on pseudo-patterns along with novel items, a form of learning that is not strictly atomistic, but which looks very much like it ‘from the outside’.
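The pseudo-pattern strategy mentioned above can be illustrated with a toy sketch. The following is a minimal illustration, not the paper's own model: it assumes a one-layer linear network trained by per-example SGD, and all names, sizes, and learning rates are illustrative. The idea is that the trained network is probed with random inputs, its own responses to those probes are kept as "pseudo-patterns," and these are interleaved with the genuinely novel item — so only the novel item is drawn from the world, even though the training set is not.

```python
import random

random.seed(0)

def predict(w, x):
    """Linear network: output is the dot product of weights and input."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sgd(w, data, epochs=200, lr=0.1):
    """Per-example gradient descent on squared error; returns new weights."""
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, x) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def mse(w, data):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

# Base knowledge: two input -> output pairs (last input component is a bias).
base = [((1, 0, 1), 1.0), ((0, 1, 1), 0.0)]
w0 = sgd([0.0, 0.0, 0.0], base)

novel = [((1, 1, 1), 1.0)]

# (a) Strictly atomistic learning: train on the novel item alone.
# Every weight is available for change, so the old mappings drift.
w_atomistic = sgd(list(w0), novel)

# (b) Pseudo-rehearsal: probe the trained network with random inputs and
# record its own current responses as pseudo-patterns ...
pseudo = []
for _ in range(20):
    x = tuple(random.randint(0, 1) for _ in range(2)) + (1,)
    pseudo.append((x, predict(w0, x)))

# ... then interleave them with the novel item during training.
w_rehearsed = sgd(list(w0), novel + pseudo)

# Interference on the base patterns is larger in the atomistic case.
loss_atomistic = mse(w_atomistic, base)
loss_rehearsed = mse(w_rehearsed, base)
```

No new external data is needed to build the pseudo-patterns — the network labels the random probes itself — which is why, "from the outside," this regime looks like training on the novel item alone.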
