Abstract

Inference models based on artificial neural networks are currently among the most effective mathematical models in artificial intelligence, but they are large in scale: completing a single inference requires a great many multiplication and addition operations, and the computational cost of an effective inference model can be dozens of times, or more, that of traditional artificial intelligence algorithms. This article focuses on heterogeneous acceleration using an FPGA (Field Programmable Gate Array), seeking an approach that minimizes system modification and migration effort while retaining the flexibility of FPGA programmability. On this foundation, a fast interface for high-speed computation on the FPGA was designed. The distinguishing feature of the design is that the FPGA acceleration platform can be attached to ordinary computers or servers while continuing to make effective use of conventional computer software and toolchains. The article compares the computation time of the proposed inference design with that of a CPU. In the computation-logic scenario at the scale targeted by this design, the FPGA reaches the same computing power as the CPU at roughly 70,000 multiplication operations; beyond that point, the experimental results show that the CPU's computation time grows faster than that of the inference design. This indicates that the larger the volume of computation, the more pronounced the acceleration effect of the design presented in this article.
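
The multiply-accumulate workload the abstract refers to is the dominant cost of neural-network inference. As a minimal illustration only (not the paper's implementation; the function name and layer dimensions are hypothetical), the following C sketch shows the dense-layer computation whose repeated multiplications and additions an FPGA accelerator would offload:

```c
#include <stddef.h>

/* Dense (fully connected) layer: out[i] = sum_j w[i][j] * in[j] + b[i].
 * Each output element costs `cols` multiply-accumulate (MAC) operations,
 * so one layer performs rows * cols MACs in total -- the workload an
 * FPGA parallelizes across its hardware multipliers, while a CPU
 * executes the loop largely sequentially. */
void dense_layer(size_t rows, size_t cols,
                 const float *w,   /* rows x cols weight matrix, row-major */
                 const float *in,  /* cols-element input vector            */
                 const float *b,   /* rows-element bias vector             */
                 float *out)       /* rows-element output vector           */
{
    for (size_t i = 0; i < rows; i++) {
        float acc = b[i];
        for (size_t j = 0; j < cols; j++) {
            acc += w[i * cols + j] * in[j];  /* one MAC per iteration */
        }
        out[i] = acc;
    }
}
```

Even a modest layer of this form (say, 256 x 256) already requires 65,536 multiplications, which is on the order of the 70,000-multiplication crossover point the abstract reports.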
