A single-layer artificial neural network (ANN) has a limited capacity to process data. In the human brain, many neurons are interconnected, and the brain's real processing power lies in this interconnectedness. Deep learning generalizes the ANN by using two or more hidden layers, which requires a larger number of neurons to construct the model. A network with two or more hidden layers is referred to as a deep neural network, and the process of training such networks is known as deep learning. This research article presents the design of a multilayer (deep) neural network targeting the Spartan-6 (xc6stx4-2t9g144) field programmable gate array (FPGA) device. The simulation is carried out using Xilinx ISE and ModelSim software. The design has two hidden layers: the first hidden layer uses (2×1) multiplexer blocks to process twenty neurons into ten output neurons, and the second hidden layer uses (1×2) demultiplexer blocks for the reverse mapping. Hardware utilization is estimated on the FPGA to evaluate the performance of the deep neural hardware chip in terms of memory, flip-flops, delay, and frequency. The design is scalable and applicable to various FPGA devices, which makes the work novel. The result is an FPGA-based neuromorphic hardware acceleration platform that offers high speed, low power, and strong real-time performance for discrete spike processing on hardware.
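A minimal software sketch of the layer topology described above, for orientation only: it models a feed-forward network whose first hidden layer folds twenty neurons down to ten and whose second hidden layer expands ten back to twenty. The activation function, random weights, and single-neuron output layer are assumptions not taken from the article, and the actual design realizes the data movement with (2×1) multiplexer and (1×2) demultiplexer blocks in hardware rather than matrix multiplications.

```python
import numpy as np

# Illustrative-only layer widths inferred from the abstract: the first hidden
# layer maps 20 neurons to 10 (realized with 2x1 multiplexers in the hardware
# design), and the second hidden layer maps 10 back to 20 (1x2 demultiplexers).
# Weights, activation, and output size are assumptions for this sketch.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((20, 10))    # first hidden layer: 20 -> 10
W2 = rng.standard_normal((10, 20))    # second hidden layer: 10 -> 20
W_out = rng.standard_normal((20, 1))  # assumed single-neuron output layer

def relu(x):
    return np.maximum(0.0, x)

def forward(x):
    """Forward pass through the two hidden layers and the output layer."""
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    return h2 @ W_out

x = rng.standard_normal((1, 20))  # one 20-element input vector
print(forward(x).shape)           # (1, 1)
```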