Abstract

A hardware-friendly bisection neural network (BNN) topology is proposed in this work for approximately implementing a large number of complex functions in arbitrary on-chip configurations. Instead of the conventional reconfigurable fully connected neural network (FC-NN) circuit topology, the proposed hardware-friendly topology realizes NN behaviors in a bisection structure, in which each neuron has a constant number of two synapse connections for both its inputs and outputs. Compared with the FC-NN topology, reconfiguring the BNN circuit topology eliminates the large number of dummy synapse connections otherwise required in hardware. As the main target application, this work aims at building a general-purpose BNN circuit topology that supports a large number of NN regressions. To achieve this target, we prove that the NN behaviors of FC-NN circuit topologies can be migrated equivalently to BNN circuit topologies. We introduce two approaches, a refining training algorithm and an inverted-pyramidal strategy, to further reduce the number of neurons and synapses. Finally, we conduct an inaccuracy-tolerance analysis to suggest guidelines for ultra-efficient hardware implementations. Compared with the state-of-the-art FC-NN circuit topology-based TrueNorth baseline, the proposed design achieves 17.8-22.2× hardware reduction with less than 1% inaccuracy.
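To make the claimed synapse savings concrete, the following is a minimal illustrative sketch, not the paper's construction: it simply counts synapses in a fully connected layer stack versus a stack where each neuron keeps only two input connections, as the bisection structure constrains. The layer widths and the two-in/two-out counting rule are assumptions chosen for illustration.

```python
# Illustrative comparison of synapse counts: fully connected (FC-NN) layers
# versus a bisection-style structure in which each neuron has a constant
# fan-in of two. This is an assumption-based sketch, not the paper's exact
# BNN circuit topology or its TrueNorth baseline.

def fc_synapses(widths):
    """Synapse count of an FC-NN stack: every neuron in layer i connects
    to every neuron in layer i+1."""
    return sum(a * b for a, b in zip(widths, widths[1:]))

def bisection_synapses(widths):
    """Synapse count under an assumed bisection rule: each neuron in
    layer i+1 receives exactly two input connections."""
    return sum(2 * b for b in widths[1:])

if __name__ == "__main__":
    widths = [64, 64, 64, 1]  # hypothetical layer widths
    fc = fc_synapses(widths)
    bnn = bisection_synapses(widths)
    print(f"FC-NN synapses:     {fc}")
    print(f"Bisection synapses: {bnn}")
    print(f"Reduction:          {fc / bnn:.1f}x")
```

Under these hypothetical widths the constant fan-in structure removes most of the pairwise connections, which is the intuition behind eliminating dummy synapses when the topology is reconfigured in hardware.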
