The paper presents the hardware architecture of a spiking neural network (SNN) based on dendritic computation principles. Integrating active dendritic properties into the neuronal structure of the SNN aims to minimize the number of functional blocks required for hardware implementation, namely synaptic connections and neurons; this reduction is necessary because the memory available on the neuromorphic architecture limits the size of the implementation. As a test task for the dendritic-computation SNN, we selected the classification of eight symbols, the digits one through eight, rendered as 3×7-pixel, 1-bit images.

Active dendritic properties were modeled using the "delay plasticity" principle [1], which introduces a mechanism for adjusting the delays of input signals at the inputs of a spiking neuron. As a proof of principle, we designed an SNN model with complementary delay inputs, referred to as the active dendrite SNN. Input spikes arriving at the primary inputs are duplicated onto the delay inputs after a modifiable time delay; for simplicity, each delay was fixed to a single value. Input images were scanned row by row. The network has six main inputs, three direct and three inverse, which encode the three pixels of each row with spikes: an "on" pixel is encoded by a spike at the corresponding direct input, and an "off" pixel by a spike at the corresponding inverse input. The row scanning time was 10 μs, the spike width 1 μs, and the delay time 5 μs.

Spiking neuron parameters were optimized with a stochastic search algorithm based on simulated annealing. For the leaky integrate-and-fire (LIF) neurons, the optimized parameters were the leak time constant (22.8 μs), the firing threshold (1150 arbitrary units), and the refractory period (1 μs). The active dendrite SNN was trained with the tempotron learning rule [2]; training optimized the maximum synaptic weight change on potentiation and on depression (0.7 and −3 arbitrary units, respectively) and the upper bound on the synaptic weights (195 arbitrary units). The complementary delayed inputs enabled the SNN neurons to learn the order in which input patterns arrived.

The paper compares the dendritic-computation architecture with our previously designed two-layer SNN, which consists of a hidden perceptron layer and an output layer of LIF neurons [3]. Using the same LIF neuron design, input image coding, and LIF layer structure as the proposed architecture, the two-layer SNN recognized 3×5 images of three symbols with 10 neurons and 63 synapses. In contrast, the active dendrite SNN recognized 3×7 images of eight symbols with only four neurons and 48 synaptic weights. In conclusion, incorporating active dendritic properties into an SNN architecture for image recognition optimized the use of functional blocks, reducing the numbers of neurons and synapses by 60% and 24%, respectively.
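The following Python sketch illustrates the complementary spike coding described above: a 3×7 binary image is scanned row by row, "on" pixels produce spikes on direct inputs, "off" pixels on inverse inputs, and every spike is duplicated onto a delay input 5 μs later. The function and channel names, the event format, the example bitmap, and the treatment of spikes as instantaneous events (ignoring the 1 μs pulse width) are illustrative assumptions, not the paper's implementation.

```python
LINE_SCAN_US = 10.0   # time allotted to each scanned row (us)
SPIKE_DELAY_US = 5.0  # delay of the duplicated (dendritic) input (us)

def encode_image(image):
    """Return a sorted list of (time_us, channel) spike events.

    `image` is a list of 7 rows, each a list of 3 bits. Channels 0-2 are
    direct inputs, 3-5 inverse inputs, 6-11 their delayed copies; this
    channel layout is a hypothetical convention.
    """
    events = []
    for row_idx, row in enumerate(image):
        t = row_idx * LINE_SCAN_US
        for px_idx, pixel in enumerate(row):
            # "on" pixel -> direct input, "off" pixel -> inverse input
            channel = px_idx if pixel else px_idx + 3
            events.append((t, channel))
            # every spike is duplicated onto the matching delay input
            events.append((t + SPIKE_DELAY_US, channel + 6))
    return sorted(events)

# A 3x7 bitmap loosely resembling the digit "1" (illustrative only).
DIGIT_1 = [
    [0, 1, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
    [1, 1, 1],
]

if __name__ == "__main__":
    for t, ch in encode_image(DIGIT_1)[:8]:
        print(f"t = {t:5.1f} us  channel {ch}")
```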
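A minimal time-stepped LIF neuron using the parameter values quoted above (leak time constant 22.8 μs, threshold 1150 arbitrary units, refractory period 1 μs) might look as follows. The discrete exponential-decay update, the 0.1 μs step, the reset-to-zero behavior, and the example weights are assumptions; the abstract does not specify the hardware integration scheme.

```python
import math

TAU_LEAK_US = 22.8      # leak time constant (us)
THRESHOLD = 1150.0      # firing threshold (arbitrary units)
REFRACTORY_US = 1.0     # refractory period (us)
DT_US = 0.1             # integration step (us), an assumption

def simulate_lif(spike_events, weights, t_end_us):
    """Integrate weighted input spikes; return output spike times in us.

    `spike_events` is a list of (time_us, channel) pairs, and `weights`
    maps a channel index to a synaptic weight (hypothetical format).
    """
    decay = math.exp(-DT_US / TAU_LEAK_US)   # per-step leak factor
    v = 0.0
    refractory_until = -1.0
    out_spikes = []
    events = sorted(spike_events)
    i = 0
    t = 0.0
    while t < t_end_us:
        v *= decay
        # deliver every input spike that falls inside this step
        while i < len(events) and events[i][0] <= t:
            if t >= refractory_until:        # inputs ignored while refractory
                v += weights.get(events[i][1], 0.0)
            i += 1
        if t >= refractory_until and v >= THRESHOLD:
            out_spikes.append(round(t, 1))
            v = 0.0                          # reset membrane on firing
            refractory_until = t + REFRACTORY_US
        t += DT_US
    return out_spikes

if __name__ == "__main__":
    # three strong spikes on channel 0 drive the neuron over threshold
    events = [(1.0, 0), (2.0, 0), (3.0, 0)]
    print(simulate_lif(events, {0: 450.0}, 70.0))   # e.g. [3.0]
```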
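The abstract states that the LIF parameters were found by a stochastic search based on simulated annealing but does not detail the search itself. A generic annealing loop over the three parameters is sketched below; the perturbation scheme, cooling schedule, parameter names, and placeholder objective are all assumptions, with the reported optimum (22.8 μs, 1150 a.u., 1 μs) being only the values the real search arrived at.

```python
import math
import random

def anneal(evaluate, start, steps=5000, t0=1.0, cooling=0.999):
    """Maximize `evaluate(params)` by simulated annealing.

    `params` is a dict such as {"tau_us": ..., "threshold": ...,
    "refractory_us": ...}; all names here are hypothetical.
    """
    current, best = dict(start), dict(start)
    current_score = best_score = evaluate(current)
    temp = t0
    for _ in range(steps):
        candidate = dict(current)
        key = random.choice(list(candidate))
        candidate[key] *= random.uniform(0.9, 1.1)  # small multiplicative perturbation
        score = evaluate(candidate)
        # accept improvements always, regressions with Boltzmann probability
        if score >= current_score or random.random() < math.exp((score - current_score) / temp):
            current, current_score = candidate, score
            if score > best_score:
                best, best_score = dict(candidate), score
        temp *= cooling                             # geometric cooling schedule
    return best, best_score

if __name__ == "__main__":
    # Placeholder objective peaking at tau_us = 22.8; the real objective
    # would be the classification accuracy of the simulated SNN.
    objective = lambda p: -abs(p["tau_us"] - 22.8)
    best, _ = anneal(objective, {"tau_us": 30.0, "threshold": 1000.0, "refractory_us": 1.0})
    print(best["tau_us"])
```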
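Finally, a heavily simplified tempotron-style update [2] with the bounds reported above: potentiation capped at +0.7, depression at −3, and weights clipped to an upper bound of 195 arbitrary units. The exponential PSP kernel, the reuse of the 22.8 μs time constant, the zero lower bound on weights, and the binary error signal are simplifying assumptions; see the original tempotron paper for the exact rule.

```python
import math

DW_POT = 0.7    # maximum weight change on potentiation (a.u.)
DW_DEP = -3.0   # maximum weight change on depression (a.u.)
W_MAX = 195.0   # upper bound on synaptic weights (a.u.)
TAU_US = 22.8   # PSP kernel time constant; reusing the leak constant is an assumption

def tempotron_update(weights, spike_events, t_max_us, should_fire, did_fire):
    """Update `weights` in place after one pattern presentation.

    `t_max_us` is the time at which the neuron's potential peaked; only
    input spikes preceding it contribute to the update.
    """
    if should_fire == did_fire:
        return                        # correct decision: leave weights alone
    dw_cap = DW_POT if should_fire else DW_DEP
    for t_spike, channel in spike_events:
        if t_spike > t_max_us:
            continue
        # the PSP kernel value at t_max scales each synapse's update
        k = math.exp(-(t_max_us - t_spike) / TAU_US)
        w = weights.get(channel, 0.0) + dw_cap * k
        weights[channel] = max(0.0, min(W_MAX, w))  # clip (zero floor assumed)

if __name__ == "__main__":
    w = {0: 100.0, 6: 100.0}
    # the neuron should have fired but did not: potentiate recent inputs
    tempotron_update(w, [(1.0, 0), (6.0, 6)], 8.0, True, False)
    print(w)
```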