Abstract

Various combinations of perceptual features are relevant for learning and action selection. However, storing all possible feature combinations imposes computationally impractical, and psychologically implausible, memory requirements in non-trivial environments, owing to a state-space explosion. Some psychological models suggest that feature combinations, or chunks, should be generated at a conservative rate (Feigenbaum and Simon, 1984). Other models suggest that chunk retrieval is governed by statistical regularities in the environment, namely recency and frequency (Anderson and Schooler, 1991). We present a computational model of chunk learning based on these two principles, and demonstrate how combining them alleviates the state-space explosion, producing exponential memory savings while maintaining a high level of performance.
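As a rough illustration of the two principles named above (not the paper's actual model), the sketch below stores a chunk only after its feature combination has recurred a fixed number of times (conservative creation) and ranks retrieval by an ACT-R-style base-level activation, ln(Σ t^-d), which Anderson and Schooler (1991) motivated from environmental recency and frequency statistics. The class name `ChunkMemory`, the `creation_threshold`, and the decay value are illustrative assumptions.

```python
import math
from collections import defaultdict


class ChunkMemory:
    """Sketch: conservative chunk creation plus recency/frequency-based retrieval.

    Creation: a candidate feature combination is committed to memory only after
    it has co-occurred `creation_threshold` times (illustrative parameter).
    Retrieval: stored chunks are ranked by a base-level activation,
    ln(sum over past uses of (age of use)^-d), so frequent and recent chunks win.
    """

    def __init__(self, creation_threshold=3, decay=0.5):
        self.creation_threshold = creation_threshold  # co-occurrences required before storing a chunk
        self.decay = decay                            # d in the base-level activation equation
        self.cooccurrence_counts = defaultdict(int)   # candidate chunks not yet committed
        self.use_times = defaultdict(list)            # stored chunk -> timestamps of encodings/uses

    def observe(self, features, now):
        """Record a co-occurring feature set; store it as a chunk only once it recurs enough."""
        chunk = frozenset(features)
        if chunk in self.use_times:
            self.use_times[chunk].append(now)         # existing chunk: record another use
            return
        self.cooccurrence_counts[chunk] += 1
        if self.cooccurrence_counts[chunk] >= self.creation_threshold:
            self.use_times[chunk].append(now)         # promote candidate to a stored chunk
            del self.cooccurrence_counts[chunk]

    def activation(self, chunk, now):
        """Base-level activation: higher for chunks used often and recently."""
        uses = self.use_times.get(frozenset(chunk), [])
        if not uses:
            return float("-inf")
        return math.log(sum((now - t) ** -self.decay for t in uses if now > t) or 1e-12)

    def retrieve(self, cue_features, now):
        """Return the stored chunk contained in the cue with the highest activation."""
        cue = set(cue_features)
        candidates = [c for c in self.use_times if c <= cue]
        if not candidates:
            return None
        return max(candidates, key=lambda c: self.activation(c, now))


memory = ChunkMemory()
for step in range(5):
    memory.observe({"red", "square"}, now=step)       # recurs: stored after the third observation
memory.observe({"blue", "circle"}, now=5)             # seen once: never stored
print(memory.retrieve({"red", "square", "large"}, now=10))  # -> frozenset({'red', 'square'})
```

Because candidate combinations are only promoted after repeated co-occurrence, the stored chunk set grows with the regularities actually present in the environment rather than with the combinatorial number of possible feature sets, which is the source of the memory savings the abstract describes.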
