Abstract

One of the most well-established principles of brain function, Hebbian learning, has led to the theoretical concept of neural assemblies, from which many influential brain theories have emerged. Palm’s work implements this concept through multiple binary Willshaw associative memories, in a model that not only has wide cognitive explanatory power but also makes neuroscientific predictions. Yet Willshaw’s associative memory reaches its top capacity only when the stored vectors are extremely sparse (the number of active bits may grow at most logarithmically with the vector’s length). This strict requirement makes it difficult to apply any model built on this memory, such as Palm’s, to real data, which is why most works apply the memory to optimal, randomly generated codes that carry no information. This issue creates the need for encoders that turn real data into sparse representations, a problem also raised by Barlow’s efficient coding principle. In this work, we propose a biologically constrained network that encodes images into codes suitable for Willshaw’s associative memory. The network is organized into groups of neurons that specialize in local receptive fields and learn through a competitive scheme. Auto- and hetero-association experiments on two visual data sets show that our network not only outperforms sparse coding baselines but also comes close to the performance achieved with optimal random codes.
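For context, below is a minimal NumPy sketch of the binary Willshaw memory the abstract refers to: storage is a logical OR of cue–target outer products, and retrieval thresholds each unit at the number of active bits in the cue. The function names, sizes, and the auto-association demo are illustrative assumptions, not the paper’s code.

```python
import numpy as np

def store(pairs, m, n):
    """Willshaw storage: binary OR of outer products of (cue, target) pairs."""
    W = np.zeros((n, m), dtype=bool)
    for x, y in pairs:
        W |= np.outer(y.astype(bool), x.astype(bool))
    return W

def retrieve(W, x):
    """Threshold retrieval: a unit fires iff it receives input from every
    active bit of the cue (threshold = number of active cue bits)."""
    s = W.astype(np.uint32) @ x.astype(np.uint32)
    return (s >= x.sum()).astype(np.uint8)

# Illustrative auto-association demo with sparse random codes.
rng = np.random.default_rng(0)
m = n = 256
k = 8   # active bits per code; capacity is best when k grows only
        # logarithmically with the vector length, as the abstract notes
pairs = []
for _ in range(20):
    x = np.zeros(m, dtype=np.uint8)
    x[rng.choice(m, size=k, replace=False)] = 1
    pairs.append((x, x))  # auto-association: cue and target coincide

W = store(pairs, m, n)
# Exact recall is expected at this low load; spurious 1s appear as the
# memory fills up, which is why dense (non-sparse) codes degrade quickly.
print((retrieve(W, pairs[0][0]) == pairs[0][0]).all())
```

Dense real-data codes would saturate `W` after a few patterns, which is the motivation for the sparse encoder the abstract proposes.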
