Abstract

This paper investigates the implementation of artificial neural networks (ANNs) on FPGAs. The problem addressed is the construction of a mathematical model for determining whether an FPGA's computing resources meet the requirements of a neural network, depending on its type, structure, and size. The FPGA's computing resource is measured as its number of LUTs (look-up tables, the basic FPGA structures that perform logical operations). The required mathematical model was derived from experimental measurements of the number of LUTs needed to implement the following ANN types on an FPGA: MLP (Multilayer Perceptron), LSTM (Long Short-Term Memory), CNN (Convolutional Neural Network), SNN (Spiking Neural Network), and GAN (Generative Adversarial Network). The experiments were carried out on the HAPS-80 S52 FPGA platform, measuring the required number of LUTs as a function of the number of layers and the number of neurons per layer for each of the above ANN types. As a result, specific analytical functions were determined that relate the required number of LUTs to the type, number of layers, and number of neurons for the ANN types most commonly used in practice. A notable feature of the results is that these functions describing the FPGA LUT requirements of various ANNs could be determined in analytical form with sufficiently high accuracy. According to the calculations, a GAN uses 17 times fewer LUTs than a CNN, while an SNN and an MLP use 80 and 14 times fewer LUTs, respectively, than an LSTM. The results can be applied in practice when selecting an FPGA for implementing an ANN of a given type and structure.
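The intended use case — checking an FPGA's LUT capacity against an ANN's estimated demand — can be sketched as follows. This is a minimal illustration, not the paper's fitted model: the linear functional form and the per-type coefficients below are hypothetical placeholders, chosen only so that their ratios reproduce the relative LUT usage reported in the abstract (LSTM ≈ 80× SNN, ≈ 14× MLP; CNN ≈ 17× GAN); the paper derives the actual type-specific analytical functions from the HAPS-80 S52 measurements.

```python
# Hypothetical per-type cost coefficients (LUTs per neuron per layer).
# Only the ratios between them reflect the abstract's reported results;
# the absolute values are illustrative assumptions.
LUTS_PER_NEURON = {
    "LSTM": 1200.0,
    "CNN": 400.0,
    "MLP": 85.0,
    "GAN": 24.0,
    "SNN": 15.0,
}

def estimate_luts(ann_type: str, layers: int, neurons_per_layer: int) -> float:
    """Estimate the LUTs required by an ANN, assuming (for illustration)
    a simple linear dependence on layers and neurons per layer."""
    return LUTS_PER_NEURON[ann_type] * layers * neurons_per_layer

def fits_on_fpga(ann_type: str, layers: int,
                 neurons_per_layer: int, lut_capacity: int) -> bool:
    """Return True if the estimated LUT demand is within the FPGA's capacity."""
    return estimate_luts(ann_type, layers, neurons_per_layer) <= lut_capacity

# Example: a 3-layer, 64-neurons-per-layer MLP against a 100k-LUT device.
print(fits_on_fpga("MLP", 3, 64, 100_000))   # prints True (85*3*64 = 16320 LUTs)
```

In a practical setting, `LUTS_PER_NEURON` and the linear form would be replaced by the fitted functions the paper determines for each ANN type, while the capacity check itself stays the same.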

