Abstract

Spiking neural networks (SNNs) have attracted growing research attention in recent years because their information-processing style is well suited to building efficient neuromorphic systems. However, realizing SNNs in hardware is computationally expensive. To improve their efficiency for hardware implementation, a field-programmable gate array (FPGA)-based SNN accelerator architecture is proposed and implemented using approximate arithmetic units. To identify the minimum bit-width required for approximate computation without any performance loss, a variable precision method is used to represent the weights of the SNN. Unlike the conventional reduced precision method, which applies a single bit-width to all weights uniformly, the proposed variable precision method allows each weight to be represented with a different bit-width, making it possible to maximize the truncation of each weight individually. Four SNNs with different network configurations and training datasets are established to compare the proposed accelerator architecture using the variable precision method against the same architecture using the conventional reduced precision method. The experimental results show that more than 40% of the weights require a smaller bit-width under the variable precision method than under the reduced precision method. With the variable precision method, the proposed architecture uses 28% fewer ALUTs and consumes 29% less power than the same architecture using the reduced precision method.
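
To make the contrast between the two methods concrete, the Python sketch below (not from the paper) assigns each weight the smallest fixed-point fractional bit-width that keeps its rounding error within a tolerance, then compares that against a uniform bit-width that must cover the most demanding weight. The per-weight error tolerance is a hypothetical stand-in criterion; the paper determines the minimal bit-width by the absence of network-level performance loss.

    import numpy as np

    def minimal_bitwidth(w, max_bits=16, tol=1e-3):
        # Smallest number of fractional bits whose fixed-point rounding
        # of w stays within tol. Hypothetical criterion: the paper judges
        # sufficiency by unchanged network accuracy, not per-weight error.
        for bits in range(1, max_bits + 1):
            scale = 1 << bits
            if abs(round(w * scale) / scale - w) <= tol:
                return bits
        return max_bits

    # Eight example weights in [-1, 1).
    weights = np.random.uniform(-1.0, 1.0, 8)

    # Variable precision: each weight keeps only the bits it needs.
    per_weight_bits = [minimal_bitwidth(float(w)) for w in weights]

    # Conventional reduced precision: one uniform bit-width, sized
    # for the most demanding weight.
    uniform_bits = max(per_weight_bits)

    print("variable:", per_weight_bits)
    print("uniform :", uniform_bits)

Under this toy criterion, every weight whose individual bit-width falls below the uniform one illustrates the paper's observation that a large fraction of weights can be truncated further than a uniform scheme allows.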
